-
PyPI registration | Data | 2022. 8. 17. 17:35
You can follow this webpage: https://bit.ly/3QP48AN ("PyPI 등록, 나만의 패키지 만들기", velog.io; in English: "Registering on PyPI: making your own package. Let's create our own command."). A PDF version of the webpage is attached in case the page goes offline. A few notes: you can skip entry_points, and you can list install_requires by 1) creating requirements.txt and 2) adding the following code block to setup.py. NB! You should change the version requirement for l..
-
Export the environment specification with Anaconda (environment.yml) | Data/Machine learning | 2022. 8. 11. 18:21
Similar to pip's "requirements.txt", Anaconda can export an environment's specification (i.e., its installed libraries) as "environment.yml": run conda env export > environment.yml. Anyone can then recreate the environment with conda env create -f environment.yml. Reference: https://www.anaconda.com/blog/moving-conda-environments
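For reference, an exported environment.yml looks roughly like the following; the environment name and the version pins here are illustrative, not an actual export:

```yaml
name: myenv
channels:
  - defaults
dependencies:
  - python=3.9
  - numpy=1.21.2
  - pip
  - pip:
      - requests==2.26.0
```

Note that conda-managed packages and pip-installed packages are listed separately, so both get restored when the file is replayed with conda env create.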
-
-
Recall and Precision | Data/Machine learning | 2022. 1. 10. 18:16
Example: https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall ("Classification: Precision and Recall", Machine Learning Crash Course, Google Developers; estimated time: 9 minutes). Precision: Precision attempts to answer the follo..
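As a quick reminder of the definitions behind the linked page, precision = TP / (TP + FP) and recall = TP / (TP + FN); a minimal sketch with made-up counts:

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp)  # of everything flagged positive, how much was right?
    recall = tp / (tp + fn)     # of all actual positives, how many did we find?
    return precision, recall

# Toy counts (illustrative): 8 true positives, 2 false positives, 4 false negatives
p, r = precision_recall(tp=8, fp=2, fn=4)
# p = 8/10 = 0.8, r = 8/12 ≈ 0.667
```

The two metrics trade off against each other: raising the classification threshold usually raises precision (fewer false positives) but lowers recall (more false negatives).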
-
Vision Transformer (ViT) Study Material | Data/Machine learning | 2021. 12. 5. 18:48
1. https://youtu.be/j6kuz_NqkG0
2. https://youtu.be/TrdevFK_am4
3. What is the Class Token? One of the interesting things about the Vision Transformer is that the architecture uses Class Tokens: randomly initialized tokens that are prepended to the beginning of the input sequence. What is the reason for this Class Token, and what does it do? Note that the Class Token is ra..
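The class-token mechanics described in point 3 can be sketched with NumPy; the shapes below assume ViT-Base on a 224x224 image (196 patches, embedding width 768), and the transformer encoder itself is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

num_patches, dim = 196, 768  # 14x14 patches of a 224x224 image, ViT-Base width
patch_embeddings = rng.normal(size=(num_patches, dim))

# The class token is a learned (here: randomly initialized) vector,
# prepended to the patch sequence before it enters the transformer encoder.
cls_token = rng.normal(size=(1, dim))
sequence = np.concatenate([cls_token, patch_embeddings], axis=0)

# After the encoder, only the output at the class-token position (index 0)
# is fed to the classification head.
cls_output = sequence[0]  # stand-in for the encoder output at position 0
```

Because the class token attends to every patch inside the encoder, its final state acts as a learned summary of the whole image.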
-
Transformer Study Materials | Data/Machine learning | 2021. 12. 4. 16:24
1. https://youtu.be/z1xs9jdZnuY
2. https://youtu.be/4Bdc55j80l8
About positional encoding: https://kazemnejad.com/blog/transformer_architecture_positional_encoding/ ("Transformer Architecture: The Positional Encoding", Amirhossein Kazemnejad's blog). The Transformer was introduced as a novel pure attention-only sequence-to-sequence architecture by Vaswani et al. Its ability for parallelizabl..
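The sinusoidal positional encoding discussed in Kazemnejad's post can be sketched as follows (formula from Vaswani et al.; the seq_len and d_model values are arbitrary):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding from "Attention Is All You Need":
    PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    """
    pos = np.arange(seq_len)[:, None]        # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]     # (1, d_model/2)
    angles = pos / np.power(10000, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions
    pe[:, 1::2] = np.cos(angles)  # odd dimensions
    return pe

pe = positional_encoding(50, 128)
```

Each position gets a unique pattern of sines and cosines at geometrically spaced frequencies, which lets the attention layers recover relative positions via linear combinations.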