
[Kaggle Extra Study] 4. Curse of Dimensionality

dongsunseng 2024. 10. 23. 01:52

What is the Curse of Dimensionality?

When solving machine learning problems, we often face training data with an excessive number of features. Because of this, training gets slower and improving model performance becomes much more challenging. This kind of problem is called the "Curse of Dimensionality".

 

Obviously, there are ways to deal with this curse: simply reduce the number of features, i.e. reduce the dimensionality. For example, we can drop unnecessary features after analyzing the data, or merge two adjacent (highly correlated) features into one.
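As a rough illustration (the DataFrame and column names here are made up for the example), the sketch below drops a feature that duplicates another one and merges two highly correlated columns into a single feature using pandas:

```python
import numpy as np
import pandas as pd

# Toy DataFrame: one redundant feature and one pair of correlated merge candidates.
rng = np.random.default_rng(0)
df = pd.DataFrame({"height_cm": rng.normal(170, 10, 500),
                   "width": rng.normal(50, 5, 500)})
df["height_in"] = df["height_cm"] / 2.54           # same information in different units
df["depth"] = df["width"] + rng.normal(0, 1, 500)  # strongly correlated with "width"

# Drop a feature that is (almost) perfectly correlated with another one.
corr = df.corr().abs()
print(corr.loc["height_cm", "height_in"])          # ~1.0 -> redundant, safe to drop
df = df.drop(columns=["height_in"])

# Merge two adjacent, highly correlated features into a single feature.
df["size"] = (df["width"] + df["depth"]) / 2
df = df.drop(columns=["width", "depth"])
print(df.columns.tolist())                         # ['height_cm', 'size']
```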

 

Since reducing the dimensionality also throws away some information, model performance can be affected. Thus, we should check whether the dimensionality reduction is worth the difference in model performance. Sometimes performance gets a little worse while training becomes much faster. Performance can even improve after reducing the dimensionality, because unnecessary data or noise gets removed. (In practice, though, dimensionality reduction usually just makes training faster.)
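A minimal sketch of this trade-off, assuming scikit-learn and its built-in digits dataset (64 pixel features): the same classifier is trained on the raw features and on a PCA projection that keeps 95% of the variance, so we can compare accuracy and fit time.

```python
from time import perf_counter

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)                # 64 features (8x8 pixel images)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for name, model in [
    ("raw 64-D", LogisticRegression(max_iter=5000)),
    ("PCA 95% var", make_pipeline(PCA(n_components=0.95),
                                  LogisticRegression(max_iter=5000))),
]:
    start = perf_counter()
    model.fit(X_train, y_train)
    elapsed = perf_counter() - start
    print(f"{name}: accuracy={model.score(X_test, y_test):.3f}, fit time={elapsed:.2f}s")
```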

 

In addition, dimensionality reduction makes plotting the data much easier, which in turn makes it much easier to discover critical characteristics of the data.
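For example (again a sketch assuming scikit-learn and matplotlib), projecting the 64-dimensional digits data down to 2 dimensions with PCA lets us draw an ordinary scatter plot of the whole dataset:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, y = load_digits(return_X_y=True)
X_2d = PCA(n_components=2).fit_transform(X)        # 64-D -> 2-D for plotting

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, cmap="tab10", s=8)
plt.colorbar(label="digit class")
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("Digits projected to 2-D with PCA")
plt.show()
```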

 

Some people might think, "Why not just gather more training data then?" That is of course possible, but the amount of data needed to improve performance grows exponentially as the dimensionality increases. That is why we apply special techniques such as the ones below.
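A back-of-the-envelope way to see this exponential growth: if each feature's range is split into just 10 bins, the number of cells the training data would need to cover grows as 10^d.

```python
# Splitting each feature's range into 10 bins gives 10**d cells to cover with
# training data -- the required amount of data explodes with the dimension d.
for d in (2, 5, 10, 100):
    print(f"d={d:>3}: about {10**d:.0e} cells")
```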

 

How to reduce dimensionality

There are two main approaches to dimensionality reduction (minimal scikit-learn sketches of both follow the list below): 

  1. Projection
    • Linearly projects high-dimensional data onto a lower-dimensional subspace
    • Key techniques:
      • PCA (Principal Component Analysis): Finds orthogonal axes that maximize data variance
        • Linear dimensionality reduction method -> Can only capture linear relationships
        • Efficient Computation + Easy Interpretation
      • Kernel PCA: Uses kernel trick to deal with nonlinear relationships 
        • Nonlinear dimensionality reduction method -> Can also capture nonlinear relationships
        • Kernel trick:
          1. Nonlinearly maps data to high-dimensional feature space
          2. Performs PCA in mapped space
      • LDA (Linear Discriminant Analysis): Finds axes that maximize between-class variance while minimizing within-class variance
        • Supervised technique: it uses the class labels, so it is well suited to classification problems
  2. Manifold Learning
    • Learns the low-dimensional nonlinear manifold where the data lies
    • Aims to preserve local characteristics
    • Key techniques:
      • LLE (Locally Linear Embedding): Preserves the local linear relationships around each data point 
      • t-SNE (t-Distributed Stochastic Neighbor Embedding): Reduces dimensions while preserving the similarities between data points
        • Highly effective for visualization
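
Below is a minimal sketch of the projection-based techniques, assuming scikit-learn and its digits dataset; the RBF kernel and gamma value for Kernel PCA are only illustrative choices, not tuned settings.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(return_X_y=True)                  # 64-dimensional data, 10 classes

# PCA: orthogonal axes of maximum variance (linear, unsupervised).
X_pca = PCA(n_components=2).fit_transform(X)

# Kernel PCA: implicitly maps the data to a high-dimensional feature space with
# the kernel trick, then performs PCA there, capturing nonlinear structure.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.001).fit_transform(X)

# LDA: supervised -- uses the labels y to find the most class-discriminative axes.
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_kpca.shape, X_lda.shape)        # (1797, 2) for each
```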
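And a similar sketch for the manifold learning techniques; the number of neighbors for LLE and the perplexity for t-SNE are again just example values.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import LocallyLinearEmbedding, TSNE

X, y = load_digits(return_X_y=True)

# LLE: expresses each point as a linear combination of its nearest neighbors and
# looks for a low-dimensional embedding that preserves those local relationships.
X_lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10).fit_transform(X)

# t-SNE: preserves pairwise similarities between points; mostly used for
# visualizing data in 2-D or 3-D rather than as a preprocessing step.
X_tsne = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X)

print(X_lle.shape, X_tsne.shape)                     # (1797, 2) for each
```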

 

 

Reference

Hands-On Machine Learning, Aurélien Géron, Chapter 8: Dimensionality Reduction (Korean edition: product.kyobobook.co.kr)

 


 

 

The more miserable a successful person's past, the more beautiful it is.