COMP3340 assignment help: Python/Java programming

COMP3340 Applied Deep Learning, The University of Hong Kong
Assignment 1
Feb 2025
Question 1 - XOR Approximation
We consider the problem of designing a feedforward neural network to approximate the XOR
function. Specifically, for any input points (x1, x2), x1, x2 ∈ {0, 1}, the output of the network
should be approximately equal to x1 ⊕ x2. Suppose the network has two input neurons, one
hidden layer with two neurons, and an output layer with one neuron, as shown in Figure 1.
The activation function for all neurons is the Sigmoid function defined as σ(z) = 1/(1 + e^(−z)).
(a) Please provide the specific values for the parameters in your designed network. Demonstrate how your network approximates the XOR function (Table 1) by performing forward propagation on the given inputs (x1, x2), x1, x2 ∈ {0, 1}.
(b) If we need the neural network to approximate the XNOR function (Table 1), how should
we modify the output neuron without altering the neurons in the hidden layer?
x1   x2   x1 ⊕ x2   XNOR(x1, x2)
0    0    0         1
0    1    1         0
1    0    1         0
1    1    0         1
Table 1: XOR and XNOR Value Table
[Figure 1: Network structure and the notation of parameters]
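
One workable answer to part (a), shown below as a small NumPy sketch, uses saturating sigmoid units: the first hidden neuron acts like OR(x1, x2), the second like NAND(x1, x2), and the output neuron like AND of the two. The specific values (±20 weights, biases −10, 30, −30) are only one of many valid choices, and the assignment of w1…w6, b1…b3 onto these units follows the usual convention rather than anything visible from Figure 1 here. The same sketch also covers part (b): because σ(−z) = 1 − σ(z), negating the output neuron's weights and bias flips the prediction from XOR to its complement, XNOR, without touching the hidden layer.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Candidate XOR parameters (one valid choice, not the unique answer):
    # hidden neuron 1 ~ OR(x1, x2), hidden neuron 2 ~ NAND(x1, x2), output ~ AND(h1, h2).
    W_hidden = np.array([[20.0, 20.0],      # weights into hidden neuron 1
                         [-20.0, -20.0]])   # weights into hidden neuron 2
    b_hidden = np.array([-10.0, 30.0])
    w_out = np.array([20.0, 20.0])
    b_out = -30.0

    def forward(x1, x2, w_out=w_out, b_out=b_out):
        h = sigmoid(W_hidden @ np.array([x1, x2]) + b_hidden)
        return float(sigmoid(w_out @ h + b_out))

    # (a) forward propagation over all four inputs: outputs ~0, ~1, ~1, ~0 (XOR)
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x1, x2, round(forward(x1, x2), 3))

    # (b) XNOR with the same hidden layer: since sigmoid(-z) = 1 - sigmoid(z),
    # negating the output weights and bias yields 1 - XOR, i.e. XNOR: ~1, ~0, ~0, ~1
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x1, x2, round(forward(x1, x2, -w_out, -b_out), 3))
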
Question 2 - Backpropagation
We consider the problem of the forward pass and backpropagation in a neural network whose
structure is shown in Figure 1. The network parameters are initialized as w1 = 1, w2 = −2, w3 = 2, w4 = −1, w5 = 1, w6 = 1, b1 = b2 = b3 = 0. The activation function for all neurons is the Sigmoid function defined as σ(z) = 1/(1 + e^(−z)).
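
Both the forward pass and the gradients in parts (a)-(c) below reuse the standard Sigmoid derivative identity, worth stating once in the document's notation:

    σ′(z) = σ(z)(1 − σ(z))

so, for example, an output-layer weight w attached to a hidden activation h has gradient ∂L/∂w = (∂L/∂y) · y(1 − y) · h by the chain rule, and the hidden-layer gradients follow the same pattern one layer further back.
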
(a) Suppose the input sample is (1, 2) and the ground truth label is 0.1. Please compute
the output y of the network.
(b) Suppose we use the Mean Squared Error (MSE) loss. Please compute the loss value for the sample (1, 2) and its gradient with respect to the network parameters using the chain rule. The final answers should be limited to 3 significant figures.
(c) Suppose we use stochastic gradient descent (SGD) with a learning rate of α = 0.1.
Please specify the parameters of the network after one step of gradient descent, using the
gradient computed in (b). Please also specify the prediction value and the corresponding loss
of the new network on the same input (1, 2).
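
The following NumPy sketch works through parts (a)-(c) numerically, as a cross-check for the hand-worked chain-rule derivation. Two assumptions that Figure 1 actually fixes but which cannot be recovered here: w1, w2 (with b1) feed the first hidden neuron, w3, w4 (with b2) the second, and w5, w6 (with b3) the output neuron; and the per-sample MSE is taken as L = (y − t)^2 (with the 1/2·(y − t)^2 convention, every gradient is simply halved).

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Initial parameters and the training sample given in the question.
    w1, w2, w3, w4, w5, w6 = 1.0, -2.0, 2.0, -1.0, 1.0, 1.0
    b1 = b2 = b3 = 0.0
    x1, x2, t = 1.0, 2.0, 0.1      # input (1, 2), ground-truth label 0.1
    lr = 0.1                       # SGD learning rate

    def forward(w1, w2, w3, w4, w5, w6, b1, b2, b3):
        h1 = sigmoid(w1 * x1 + w2 * x2 + b1)   # assumed wiring: w1, w2, b1 -> hidden 1
        h2 = sigmoid(w3 * x1 + w4 * x2 + b2)   # assumed wiring: w3, w4, b2 -> hidden 2
        y = sigmoid(w5 * h1 + w6 * h2 + b3)    # w5, w6, b3 -> output neuron
        return h1, h2, y

    # (a) forward pass
    h1, h2, y = forward(w1, w2, w3, w4, w5, w6, b1, b2, b3)

    # (b) MSE loss and chain-rule gradients, using sigmoid'(z) = s * (1 - s)
    loss = (y - t) ** 2
    d3 = 2 * (y - t) * y * (1 - y)             # dL/dz at the output neuron
    d1 = d3 * w5 * h1 * (1 - h1)               # dL/dz at hidden neuron 1
    d2 = d3 * w6 * h2 * (1 - h2)               # dL/dz at hidden neuron 2
    grads = {"w1": d1 * x1, "w2": d1 * x2, "b1": d1,
             "w3": d2 * x1, "w4": d2 * x2, "b2": d2,
             "w5": d3 * h1, "w6": d3 * h2, "b3": d3}
    print(f"y = {y:.3g}, loss = {loss:.3g}")
    print({k: float(f"{v:.3g}") for k, v in grads.items()})

    # (c) one SGD step with learning rate 0.1, then re-evaluate on the same sample
    w1, w2, b1 = w1 - lr * grads["w1"], w2 - lr * grads["w2"], b1 - lr * grads["b1"]
    w3, w4, b2 = w3 - lr * grads["w3"], w4 - lr * grads["w4"], b2 - lr * grads["b2"]
    w5, w6, b3 = w5 - lr * grads["w5"], w6 - lr * grads["w6"], b3 - lr * grads["b3"]
    _, _, y_new = forward(w1, w2, w3, w4, w5, w6, b1, b2, b3)
    print(f"after SGD: y = {y_new:.3g}, loss = {(y_new - t) ** 2:.3g}")
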
