Abstract
As an effective dimensionality reduction method, Same Degree Distribution (SDD) has been demonstrated to maintain data structure better than other dimensionality reduction methods. Moreover, tuning only the degree of the degree-distribution makes SDD less costly than methods that require tuning the number of neighbours or the perplexity. Despite these advantages, SDD remains expensive compared with parameter-free methods such as PCA and MDS. A parameter-free SDD is proposed based on the standard SDD, with two main differences: 1) it does not require tuning the degree of the degree-distribution over the entire range from 1 to 15, but uses only degree 1; and 2) it re-scales the pairwise distances to the range [0, 2] instead of [0, 1]. A theoretical analysis is presented to prove the improved performance of the parameter-free SDD. This paper also proposes a parametric version of SDD, which uses a deep neural network to learn the mapping from samples of the original data to their corresponding embedded representations in a low-dimensional space. Comparative experiments with SDD and other methods demonstrate the effectiveness of the parametric SDD.
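The re-scaling step mentioned above can be illustrated with a minimal sketch: min-max scaling of a pairwise Euclidean distance matrix into [0, 2] rather than the usual [0, 1]. This is a hypothetical illustration only; the function name `rescale_pairwise_distances` and the exact scaling formula are assumptions, not the paper's own implementation.

```python
import numpy as np

def rescale_pairwise_distances(X, upper=2.0):
    """Min-max re-scale the pairwise Euclidean distances of X to [0, upper].

    Hypothetical sketch: parameter-free SDD maps distances into [0, 2]
    instead of [0, 1]; the paper's exact formulation may differ.
    """
    # Pairwise Euclidean distance matrix via broadcasting.
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))
    d_max = D.max()
    if d_max == 0:          # all points identical: nothing to scale
        return D
    return upper * D / d_max  # distances now lie in [0, upper]

X = np.random.default_rng(0).normal(size=(10, 5))
D2 = rescale_pairwise_distances(X)              # range [0, 2]
D1 = rescale_pairwise_distances(X, upper=1.0)   # standard [0, 1] re-scaling
```

Widening the range to [0, 2] stretches the spread of the re-scaled distances, which is the lever the parameter-free variant uses in place of degree tuning.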
Original language | English |
---|---|
Article number | 120030 |
Pages (from-to) | 120030 |
Journal | Information Sciences |
Volume | 661 |
DOIs | |
Publication status | Published - 22 Dec 2023 |
Keywords
- computational time
- dimensionality reduction techniques
- high dimensional data
- structure capturing