-
Deep Neural Networks with PyTorch, COURSERA | Note-taking/Courses | 2021. 1. 20. 09:02
1.1 Tensors 1D a = torch.tensor([0, 1, 2, 3, 4]) a.dtype -> torch.int64 a.type() -> torch.LongTensor a = torch.tensor([0., 1., 2., 3., 4.], dtype=torch.int32) a.dtype -> torch.int32 a = torch.FloatTensor([0, 1, 2, 3, 4]) -> tensor([0., 1., 2., 3., 4.]) a = a.type(torch.FloatTensor) a.size() -> torch.Size([5]) a.ndimension() -> 1 a_col = a.view(5, 1), equivalent to a.view(-1, 1) // .view(..) is the same as .reshape(..
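The 1-D tensor operations in the excerpt can be sketched as a runnable snippet (assuming PyTorch is installed; the printed values follow from the dtype and shape rules the notes describe):

```python
import torch

# A 1-D tensor built from Python ints defaults to int64 (LongTensor)
a = torch.tensor([0, 1, 2, 3, 4])
print(a.dtype)         # torch.int64
print(a.type())        # torch.LongTensor

# Cast to float, then inspect shape and dimensionality
a = a.type(torch.FloatTensor)
print(a.size())        # torch.Size([5])
print(a.ndimension())  # 1

# view(5, 1) and view(-1, 1) produce the same 5x1 column vector;
# -1 asks PyTorch to infer that dimension from the element count
a_col = a.view(-1, 1)
print(a_col.size())    # torch.Size([5, 1])
```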
-
Head First Statistics | Note-taking/Books | 2021. 1. 16. 11:25
Frequency Density = Frequency / Group width Cumulative Frequency Mean $$ \mu = \frac{\sum_{i=1}^{n} x_i }{ n } $$ $$ \mu = \frac{\sum_{i=1}^{n} f_i x_i }{ \sum_{i=1}^{n} f_i } $$ where $f_i$ denotes the frequency of $x_i$ (e.g. $x_i$ can denote a certain age, and $f_i$ is the number of people of that age). Skewed Data When outliers pull the mean of the data to the left or right: Average There are several kinds of..
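The frequency-weighted mean above can be checked with a short sketch (the age/count values are hypothetical, chosen only for illustration): it agrees with the plain mean over the expanded data set.

```python
# Frequency-weighted mean: mu = sum(f_i * x_i) / sum(f_i)
# Hypothetical data: x_i = age, f_i = number of people of that age
ages = [20, 21, 22, 23]
counts = [3, 5, 4, 2]

weighted_mean = sum(f * x for f, x in zip(counts, ages)) / sum(counts)

# Equivalent to the plain mean over the expanded data set,
# where each age appears f_i times
expanded = [x for f, x in zip(counts, ages) for _ in range(f)]
plain_mean = sum(expanded) / len(expanded)

print(weighted_mean)              # 299 / 14 ~= 21.36
print(weighted_mean == plain_mean)  # True
```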
-
Practical Statistics for Data Scientists | Note-taking/Books | 2021. 1. 14. 17:17
Ordinal: Categorical data that has an explicit ordering Records: A row in the table (dataset) Rectangular data: Data in a table form Non-rectangular data structure: 1) Spatial data (data indexed in some way by its spatial location), 2) Graph data structure Trimmed mean: A compromise between the median and the mean. The front and back ends of the sorted data are dropped (trimmed) by $n\%$ (usually $10\%$) a..
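The trimmed mean described above can be sketched as a small helper (a rough illustration, not the book's code; the 10% default follows the "usually 10%" remark):

```python
def trimmed_mean(values, trim_frac=0.10):
    """Drop trim_frac of the sorted values from each end, then average the rest."""
    s = sorted(values)
    k = int(len(s) * trim_frac)  # number of points trimmed from each end
    trimmed = s[k:len(s) - k] if k > 0 else s
    return sum(trimmed) / len(trimmed)

# Outliers at both ends (1 and 100) are discarded before averaging,
# so the result sits between the median and the untrimmed mean
data = [1, 5, 5, 6, 6, 6, 7, 7, 8, 100]
print(trimmed_mean(data))  # averages the middle 8 values -> 6.25
```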
-
An Introduction to Statistical Learning with Applications in R | Note-taking/Books | 2021. 1. 12. 17:04
Notation and Simple Matrix Algebra $n$ represents the number of distinct data points. $p$ denotes the number of variables. $\boldsymbol{X}$ denotes an $n \times p$ matrix whose $(i, j)$-th element is $x_{ij}$. That is, $$ \boldsymbol{X} = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1p} \\ x_{21} & x_{22} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{n..
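The notation above maps directly onto an array: rows are observations, columns are variables. A minimal sketch with hypothetical values (assuming NumPy; note that NumPy is 0-indexed while the book's $x_{ij}$ is 1-indexed):

```python
import numpy as np

# Hypothetical X with n = 3 observations and p = 2 variables
n, p = 3, 2
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
print(X.shape)  # (3, 2), i.e. (n, p)

# x_{ij} is the j-th variable for the i-th observation;
# the book's x_{21} corresponds to X[1, 0] in 0-indexed NumPy
print(X[1, 0])  # 3.0
```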