What I Learned

Linear Regression

Gradient Descent

Questions I Thought About

[From Linear Regression] There are broadly three ways to quantify how strongly variables are related — the coefficient of determination, the Pearson correlation coefficient, and the Spearman correlation coefficient — and the Pearson correlation coefficient is the most widely used. Why?

<aside> 💬 The Pearson correlation coefficient is the most widely used because of its simplicity, ease of computation, and clear interpretation. It is especially useful when the data are continuous and the relationship is linear. However, when the relationship is nonlinear or the data are non-parametric, other measures such as the Spearman correlation coefficient or the coefficient of determination can be more appropriate.

</aside>
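A minimal sketch of that tradeoff, using `scipy.stats` on made-up toy data: on a monotonic but nonlinear relationship, the rank-based Spearman coefficient stays at exactly 1, while Pearson (which measures only linear association) falls below 1.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.arange(1.0, 21.0)   # 1, 2, ..., 20
y_linear = 2.0 * x + 1.0   # perfectly linear relationship
y_cubic = x ** 3           # monotonic but nonlinear relationship

# On linear data, both coefficients equal 1.
print(pearsonr(x, y_linear)[0], spearmanr(x, y_linear)[0])

# On monotonic nonlinear data, Spearman (rank-based) is still exactly 1,
# but Pearson is strictly below 1 because the relationship is not linear.
r_pearson = pearsonr(x, y_cubic)[0]
r_spearman = spearmanr(x, y_cubic)[0]
print(r_pearson, r_spearman)
```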

[From Linear Regression] For regression problems, the mean squared error (MSE) is the usual loss function. Why?

<aside> 💬 If we simply define error = actual − predicted and sum all the errors, positive and negative errors cancel each other, so the sum cannot capture the total magnitude of the error. So we square every error before summing. (Taking absolute values would also fix the cancellation — that is the MAE — but squaring gives a loss that is differentiable everywhere, which suits gradient-based optimization, and it penalizes large errors more heavily.)

</aside>
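The cancellation problem is easy to see on three made-up points: the raw errors sum to zero even though the predictions are clearly off, while the MSE reports the true magnitude.

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([5.0, 5.0, 5.0])  # errors of -2, 0, +2

errors = y_true - y_pred
print(errors.sum())       # 0.0 — cancellation hides the error entirely
mse = np.mean(errors ** 2)
print(mse)                # 8/3 ≈ 2.67 — squaring preserves the magnitude
```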

[From Gradient Descent] In PyTorch you call optimizer.zero_grad(). Why?

<aside> 💬 PyTorch accumulates gradients: each call to backward() adds the newly computed gradients onto whatever is already stored in each parameter's .grad. So before every backpropagation step you must reset the gradients to zero, which is exactly what optimizer.zero_grad() does.

</aside>
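A minimal demonstration of the accumulation behavior on a single scalar parameter (loss = 3w, so the gradient is 3 at every step):

```python
import torch

w = torch.tensor([1.0], requires_grad=True)

# Two backward passes without zeroing: gradients add up, 3 + 3 = 6.
for _ in range(2):
    loss = (3.0 * w).sum()
    loss.backward()
print(w.grad.item())  # 6.0

# Zero the gradient first (what optimizer.zero_grad() does per parameter),
# then a single backward pass gives only the current step's gradient.
w.grad.zero_()
loss = (3.0 * w).sum()
loss.backward()
print(w.grad.item())  # 3.0
```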

[Assignment 02] coef_ and intercept_

<aside> 💬 coef_ — the attribute that stores the learned weights (coefficients) of a linear regression model.

</aside>

<aside> 💬 intercept_ — the attribute that stores the learned bias (intercept) of a linear regression model. It is a single value, a constant offset applied identically to every prediction.

</aside>
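A quick sketch with scikit-learn's LinearRegression on toy data generated from y = 2x + 1, so the fit should recover coef_ ≈ [2.0] and intercept_ ≈ 1.0:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.arange(10.0).reshape(-1, 1)  # single feature, shape (10, 1)
y = 2.0 * X.ravel() + 1.0           # noiseless line y = 2x + 1

model = LinearRegression().fit(X, y)
print(model.coef_)       # learned weight per feature, ~[2.]
print(model.intercept_)  # learned bias, ~1.0
```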

import torch.nn as nn

model = nn.Linear(1, 1)  # assuming a single-feature linear model: y = b1*x + b0
params = list(model.parameters())
b1 = params[0]  # params[0] is the weight
b0 = params[1]  # params[1] is the bias