
All posts (195)

1. Signals, System Properties Signal: A function of one or more variables Unit Impulse: \( \delta[n] = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0 \end{cases} \) Unit Step: \( u[n] = \begin{cases} 1 & n \geq 0 \\ 0 & n < 0 \end{cases} \) \( \delta[n] = u[n] - u[n - 1] \) \( u[n] = \sum_{k = 0}^\infty \delta[n - k] \) System: A process by which input signals are transformed into output signals \( x[n] \) -> Discrete-Time System -> \( y[n] \) \( x(t) \) -> Continuous-Time System -> \( y(t) \) A system is.. 2022. 7. 21.
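A minimal NumPy sketch of the definitions in the excerpt above (the helper names unit_impulse and unit_step are my own, not from the post), checking \( \delta[n] = u[n] - u[n-1] \) and \( u[n] = \sum_{k=0}^\infty \delta[n-k] \) on a finite window:

```python
import numpy as np

def unit_impulse(n):
    """delta[n]: 1 at n == 0, 0 elsewhere (assumed helper, not from the post)."""
    return np.where(n == 0, 1, 0)

def unit_step(n):
    """u[n]: 1 for n >= 0, 0 for n < 0."""
    return np.where(n >= 0, 1, 0)

n = np.arange(-5, 6)

# delta[n] = u[n] - u[n - 1]
assert np.array_equal(unit_impulse(n), unit_step(n) - unit_step(n - 1))

# u[n] = sum_{k=0}^inf delta[n - k]; a finite K suffices on this window
K = 20
assert np.array_equal(unit_step(n), sum(unit_impulse(n - k) for k in range(K + 1)))
```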
1. ML Strategy Introduction to ML Strategy Why ML Strategy? Ideas to improve an ML model: collect more data, collect a more diverse training set, train the algorithm longer with gradient descent, try Adam instead of gradient descent, try a bigger network, try a smaller network, try dropout, add L2 regularization, modify the network architecture, ... Naturally, it is best to analyze the ML problem and try the ideas that look most promising (ML strategy). Ort.. 2022. 7. 20.
3. Hyperparameter Tuning, Batch Normalization and Programming Frameworks Hyperparameter Tuning Tuning Process Try random values for hyperparameters, don't use a grid Coarse to fine Using an Appropriate Scale to Pick Hyperparameters For example, search \( \alpha \) on a log scale For \( \beta \), consider the value of \( 1 - \beta \) Hyperparameters Tuning in Practice: Pandas vs. Caviar Babysitting one model vs. training many models in parallel Batch Normalization Normaliz.. 2022. 7. 15.
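A small sketch of the sampling scales mentioned above, assuming a random search over a learning rate \( \alpha \in [10^{-4}, 1] \) drawn on a log scale and a \( \beta \) drawn via \( 1 - \beta \) (the ranges and function names are illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_alpha(low=1e-4, high=1.0):
    """Draw a learning rate uniformly on a log10 scale between low and high."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high))

def sample_beta(low=0.9, high=0.999):
    """Draw beta by sampling 1 - beta on a log10 scale, so values near 1 are explored evenly."""
    one_minus_beta = 10 ** rng.uniform(np.log10(1 - high), np.log10(1 - low))
    return 1 - one_minus_beta

# Random search: draw independent (alpha, beta) combinations instead of a fixed grid.
trials = [(sample_alpha(), sample_beta()) for _ in range(25)]
```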
2. Optimization Algorithms Mini-batch Gradient Descent Batch Gradient Descent - one gradient step uses all m training examples Mini-batch Gradient Descent - split the m examples into t mini-batches of size m / t and take a gradient step per mini-batch Understanding Mini-batch Gradient Descent Batch Gradient Descent - mini-batch size is m Stochastic Gradient Descent - mini-batch size is 1 In practice, mini-batch size is in between 1 and m Exponentially Weighted Averages \( v_t = \beta v_{t-1} + (1 - \beta)\theta_t \) .. 2022. 7. 14.
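A short sketch of the two ideas in the excerpt above, assuming training examples are stored as columns of X and Y (the helper names make_minibatches and exp_weighted_average are my own): splitting m examples into mini-batches, and the running average \( v_t = \beta v_{t-1} + (1 - \beta)\theta_t \):

```python
import numpy as np

def make_minibatches(X, Y, batch_size, seed=0):
    """Shuffle the m examples (columns of X, Y) and split them into mini-batches.
    batch_size == m gives batch gradient descent; batch_size == 1 gives stochastic."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    perm = rng.permutation(m)
    X, Y = X[:, perm], Y[:, perm]
    return [(X[:, i:i + batch_size], Y[:, i:i + batch_size])
            for i in range(0, m, batch_size)]

def exp_weighted_average(thetas, beta=0.9):
    """Return the exponentially weighted averages v_t = beta*v_{t-1} + (1-beta)*theta_t, with v_0 = 0."""
    v, history = 0.0, []
    for theta in thetas:
        v = beta * v + (1 - beta) * theta
        history.append(v)
    return history
```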