
📖 Theory/AI & ML (12)

๋”ฅ๋Ÿฌ๋‹์—์„œ์˜ ํ™•๋ฅ  ๋ถ„ํฌ | probability distribution ํ™•๋ฅ  ๋ถ„ํฌ๋Š” ์–ด๋–ค ์‚ฌ๊ฑด์ด ์ผ์–ด๋‚  ๊ฐ€๋Šฅ์„ฑ์„ ๋‚˜ํƒ€๋‚ด๋Š” ์ˆ˜ํ•™์  ๋ชจ๋ธ๋กœ, ํ™•๋ฅ  ๋ณ€์ˆ˜๋ผ๊ณ  ๋ถˆ๋ฆฌ๋Š” ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ๊ฐ€๋Šฅํ•œ ๊ฐ’๋“ค๊ณผ ๊ทธ ๊ฐ’๋“ค์ด ๋‚˜ํƒ€๋‚  ํ™•๋ฅ ์„ ์ •์˜ํ•œ๋‹ค. ์ด๋Ÿฌํ•œ ๋ถ„ํฌ๋Š” ์ด์‚ฐ์ ์ธ ๊ฒฝ์šฐ์™€ ์—ฐ์†์ ์ธ ๊ฒฝ์šฐ๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ฃผ์‚ฌ์œ„๋ฅผ ๋˜์กŒ์„ ๋•Œ ๋‚˜์˜ค๋Š” ๋ˆˆ์— ๋Œ€ํ•œ ํ™•๋ฅ ๋ณ€์ˆ˜๊ฐ€ ์žˆ์„ ๋•Œ, ๊ทธ ๋ณ€์ˆ˜์˜ ํ™•๋ฅ ๋ถ„ํฌ๋Š” ์ด์‚ฐ๊ท ๋“ฑ๋ถ„ํฌ๊ฐ€ ๋œ๋‹ค. ์ด์‚ฐ ํ™•๋ฅ  ๋ถ„ํฌ(Discrete Probability Distribution) ํ™•๋ฅ  ๋ณ€์ˆ˜๊ฐ€ ์ด์‚ฐ์ ์ธ ๊ฐ’์„ ๊ฐ€์งˆ ๋•Œ ์‚ฌ์šฉ ๋จ ํ™•๋ฅ  ์งˆ๋Ÿ‰ ํ•จ์ˆ˜(Probability Mass Function, PMF): ๊ฐ ๊ฐ’์— ๋Œ€ํ•œ ํ™•๋ฅ ์„ ๋‚˜ํƒ€๋‚ด๋Š” ํ•จ์ˆ˜ ์˜ˆ์‹œ ์ดํ•ญ ๋ถ„ํฌ, ํฌ์•„์†ก ๋ถ„ํฌ ๋“ฑ ์ฃผ์‚ฌ์œ„์˜ ๋ˆˆ๊ธˆ, ๋™์ „์„ ๋˜์กŒ์„ ๋•Œ์˜ ์•ž๋ฉด, ๋’ท๋ฉด, ๋กœ๋˜์˜ ๋‹น์ฒจ ๋ฒˆํ˜ธ ๋“ฑ ์—ฐ์† ํ™•๋ฅ  ๋ถ„ํฌ(Continuous Proba.. 2023. 12. 9.
Precision์ด ์ค‘์š”ํ•œ ๊ฒฝ์šฐ์™€ Recall์ด ์ค‘์š”ํ•œ ๊ฒฝ์šฐ 2023.12.08 - [๐Ÿ“– Theory/AI & ML] - Precision, Recall, AP (Average Precision) ๊ฐ„๋‹จ ์„ค๋ช… | ๊ฐ์ฒด ๊ฒ€์ถœ ์„ฑ๋Šฅ ์ง€ํ‘œ Precision, Recall, AP (Average Precision) ๊ฐ„๋‹จ ์„ค๋ช… | ๊ฐ์ฒด ๊ฒ€์ถœ ์„ฑ๋Šฅ ์ง€ํ‘œ Object Detection(๊ฐ์ฒด ๊ฒ€์ถœ) ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ์ธก์ •ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” Precision(์ •๋ฐ€๋„), Recall(์žฌํ˜„์œจ), ๊ทธ๋ฆฌ๊ณ  Average Precision(AP)๋ฅผ ๊ผญ ์•Œ์•„์•ผ ํ•œ๋‹ค. ML๋ฅผ ๊ณต๋ถ€ํ•˜๋‹ค ๋ณด๋ฉด ํ•œ ๋ฒˆ ์ด์ƒ์€ ๊ณต๋ถ€ํ•˜๋Š” ๊ฐœ๋…์ธ๋ฐ, ๋Š˜ ํ—ท mvje.tistory.com ์ด์ „ ํฌ์ŠคํŒ…์—์„œ Precision, Recall, AP (Average Precision) ์ด๋ผ๋Š” ์„ฑ๋Šฅ ์ง€ํ‘œ๋“ค์— ๋Œ€ํ•ด ์‚ดํŽด๋ดค๋‹ค. ๊ฐ์ฒด ๊ฒ€์ถœ์—์„œ .. 2023. 12. 8.
Precision, Recall, AP (Average Precision) Explained | Object Detection Metrics To measure the performance of an Object Detection model, you must know Precision, Recall, and Average Precision (AP). Anyone studying ML meets these concepts at least once, but they are easy to confuse, so I want to write them down. Precision and Recall matter in object detection because a poorly performing detector may produce many false detections, or may fail to detect objects at all (missed detections). Why does this matter? Depending on the detection task, false detections can be fatal in some cases, and missed detections fatal in others. Other explanations start from complicated concepts like true positive, false positive, .. 2023. 12. 8.
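The two failure modes named above map directly onto the two metrics; a minimal sketch, with made-up detection counts:

```python
# Illustrative sketch (counts are invented): precision and recall from
# true-positive (TP), false-positive (FP), and false-negative (FN) counts.
def precision(tp: int, fp: int) -> float:
    # Of everything the detector flagged, what fraction was correct?
    # False detections (FP) drag this down.
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Of all objects that should have been found, what fraction was found?
    # Missed detections (FN) drag this down.
    return tp / (tp + fn)

# A detector that over-detects: many false positives hurt precision,
# while few missed objects keep recall high.
p = precision(tp=80, fp=40)
r = recall(tp=80, fn=10)
```

With these numbers precision is 80/120 while recall is 80/90, which is exactly the over-detecting regime described in the post.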
๋จธ์‹ ๋Ÿฌ๋‹, ๋”ฅ๋Ÿฌ๋‹์ด๋ž€? | ๋จธ์‹ ๋Ÿฌ๋‹๊ณผ ๋”ฅ๋Ÿฌ๋‹์˜ ์ฐจ์ด | ML & DL Machine Learning (๋จธ์‹ ๋Ÿฌ๋‹) ๋จธ์‹ ๋Ÿฌ๋‹(Machine Learning)์€ ์ปดํ“จํ„ฐ๊ฐ€ ๋ฐ์ดํ„ฐ๋ฅผ ํ•™์Šตํ•˜์—ฌ ํŒจํ„ด์„ ์ธ์‹ํ•˜๊ณ , ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋ฅผ ์˜ˆ์ธกํ•˜๊ฑฐ๋‚˜ ๋ถ„๋ฅ˜ํ•˜๋Š” ๊ธฐ์ˆ ๋กœ, ํฌ๊ฒŒ ์ง€๋„ํ•™์Šต(Supervised Learning), ๋น„์ง€๋„ํ•™์Šต(Unsupervised Learning), ๊ฐ•ํ™”ํ•™์Šต(Reinforcement Learning)์œผ๋กœ ๊ตฌ๋ถ„๋œ๋‹ค. 1. ์ง€๋„ํ•™์Šต(Supervised Learning) ์ง€๋„ํ•™์Šต์€ ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์™€ ์ถœ๋ ฅ ๋ฐ์ดํ„ฐ๊ฐ€ ๋ชจ๋‘ ์ฃผ์–ด์ง€๋Š” ์ƒํ™ฉ์—์„œ ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ ์‚ฌ์ด์˜ ๊ด€๊ณ„๋ฅผ ๋ชจ๋ธ๋งํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋‹ค. ์ด๋Ÿฌํ•œ ๋ฐฉ๋ฒ•์„ ํ†ตํ•ด ์ƒˆ๋กœ์šด ์ž…๋ ฅ ๋ฐ์ดํ„ฐ๊ฐ€ ์ฃผ์–ด์กŒ์„ ๋•Œ, ํ•ด๋‹น ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ์ถœ๋ ฅ ๋ฐ์ดํ„ฐ๋ฅผ ์˜ˆ์ธกํ•  ์ˆ˜ ์žˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ด๋ฉ”์ผ์ด ์ŠคํŒธ์ธ์ง€ ์•„๋‹Œ์ง€ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋ฌธ์ œ์—์„œ๋Š” ์ด๋ฉ”์ผ์˜ ๋‚ด์šฉ๊ณผ ์ŠคํŒธ ์—ฌ๋ถ€๋ฅผ ์ž…๋ ฅ ๋ฐ.. 2023. 4. 19.
[DL] CNN์—์„œ Convolutional layer์˜ ๊ฐœ๋…๊ณผ ์˜๋ฏธ | ์ปจ๋ณผ๋ฃจ์…˜ ์‹ ๊ฒฝ๋ง | ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง ๋”ฅ๋Ÿฌ๋‹์—์„œ CNN (Convolutional Neural Network) ์€ ์ฃผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ๋ถ„์„ํ•˜๋Š” ๋ฐ ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์œผ๋กœ ์‚ฌ์šฉํ•˜๋Š” ANN (Artificial Neural Network) ์ด๋‹ค. ๋ฌผ๋ก  ์š”์ฆ˜์€ ํŠธ๋žœ์Šคํฌ๋จธ ๊ธฐ๋ฐ˜์˜ ๋„คํŠธ์›Œํฌ๋ฅผ ๋งŽ์ด ์‚ฌ์šฉํ•˜์ง€๋งŒ CNN ๋˜ํ•œ ์—ฌ์ „ํžˆ ๋งŽ์ด ์‚ฌ์šฉ๋˜๊ณ  ํŠธ๋žœ์Šคํฌ๋จธ์™€ CNN์˜ ์กฐํ•ฉ์˜ ๋„คํŠธ์›Œํฌ ๋˜ํ•œ ์‹ฌ์‹ฌ์น˜ ์•Š๊ฒŒ ๋ณผ ์ˆ˜ ์žˆ๋‹ค. ์ด๋ฒˆ ํฌ์ŠคํŒ…์—์„œ๋Š” CNN์˜ ํ•ต์‹ฌ layer์ธ convolutional layer์˜ ๊ฐœ๋…๊ณผ ์˜๋ฏธ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜๊ณ ์ž ํ•œ๋‹ค. * CNN (Convolutional Neural Network) CNN์€ ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ๊ณ„์ธต์ ์œผ๋กœ ํ•™์Šตํ•˜๋ฉฐ ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์˜ ํŠน์ง•์„ ์ถ”์ถœํ•˜๊ธฐ ์œ„ํ•ด Convolution, Pooling, Non-linear activation funct.. 2023. 3. 23.
[DL] ๋”ฅ๋Ÿฌ๋‹์—์„œ์˜ Regularization : Weight Decay, Batch Normalization, Early Stopping ๋”ฅ๋Ÿฌ๋‹์—์„œ Regularization์€ ๋ชจ๋ธ์˜ overfitting์„ ๋ฐฉ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ํŠน์ •ํ•œ ๊ฒƒ์— ๊ทœ์ œ๋ฅผ ํ•˜๋Š” ๋ฐฉ๋ฒ•๋“ค์„ ์ด์นญํ•˜๊ณ , ๋Œ€ํ‘œ์ ์œผ๋กœ ์•„๋ž˜์™€ ๊ฐ™์€ ๋ฐฉ๋ฒ•๋“ค์ด ์žˆ๋‹ค. *Overfitting : ๊ธฐ๊ณ„ ํ•™์Šต ๋ชจ๋ธ์—์„œ ์ž์ฃผ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ ์ค‘ ํ•˜๋‚˜๋กœ, ๋ชจ๋ธ์ด ํ•™์Šต ๋ฐ์ดํ„ฐ์…‹์— ๊ณผ๋„ํ•˜๊ฒŒ fit๋˜์–ด ์ผ๋ฐ˜ํ™” ์„ฑ๋Šฅ์ด ๋–จ์–ด์ง€๋Š” ํ˜„์ƒ. Weight Decay - L1, L2 Batch Normalization Early Stopping Weight Decay Neural network์˜ ํŠน์ • weight๊ฐ€ ๋„ˆ๋ฌด ์ปค์ง€๋Š” ๊ฒƒ์€ ๋ชจ๋ธ์˜ ์ผ๋ฐ˜ํ™” ์„ฑ๋Šฅ์„ ๋–จ์–ด๋œจ๋ ค overfitting ๋˜๊ฒŒ ํ•˜๋ฏ€๋กœ, weight์— ๊ทœ์ œ๋ฅผ ๊ฑธ์–ด์ฃผ๋Š” ๊ฒƒ์ด ํ•„์š”. L1 regularization, L2 regularization ๋ชจ๋‘ ๊ธฐ์กด Loss fun.. 2022. 3. 23.