
Audio Transfer Learning

•
DenseNet
Efficient Classification of Environmental Sounds through Multiple Features Aggregation and Data Enhancement Techniques for Spectrogram Images
Over the past few years, the study of environmental sound classification (ESC) has become very popular due to the intricate nature of environmental sounds. This paper reports our study on employing various acoustic features aggregation and data enhancement approaches for the effective classification of environmental sounds. The proposed data augmentation techniques are mixtures of the reinforcement, aggregation, and combination of distinct acoustic features. These features are known as spectrogram image features (SIFs) and are retrieved by different audio feature extraction techniques. All audio features used in this manuscript are categorized into two groups: one with general features and the other with Mel filter bank-based acoustic features. Two novel features based on the logarithmic scale of the Mel spectrogram (Mel), Log (Log-Mel) and Log (Log (Log-Mel)), denoted as L2M and L3M, are introduced in this paper. In our study, three prevailing ESC benchmark datasets, ESC-10, ESC-50, and Urbansound8k (Us8k), are used. Most of the audio clips in these datasets are not fully filled with sound and include silent parts. Therefore, silence trimming is implemented as one of the pre-processing techniques. The training is conducted by using the transfer learning model DenseNet-161, which is further fine-tuned with individual optimal learning rates based on the discriminative learning technique. The proposed methodologies attain state-of-the-art outcomes for all used ESC datasets, i.e., 99.22% for ESC-10, 98.52% for ESC-50, and 97.98% for Us8k. This work also considers real-time audio data to evaluate the performance and efficiency of the proposed techniques. The implemented approaches also have competitive results on real-time audio data.
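The L2M and L3M features described above are repeated logarithms of the Mel spectrogram. A minimal numpy sketch of the idea, using a synthetic matrix as a stand-in for a real Mel spectrogram (in practice it would come from an audio library), and with an offset-and-shift scheme that is our own assumption, since the abstract does not spell out how non-positive values are handled before each log:

```python
import numpy as np

def l2m_l3m(mel_spec: np.ndarray, eps: float = 1e-6):
    """Compute Log-Mel, L2M = Log(Log-Mel), and L3M = Log(Log(Log-Mel)).

    Before each successive log, the matrix is shifted so all values are
    strictly positive (an illustrative choice, not taken from the paper).
    """
    log_mel = np.log(mel_spec + eps)
    l2m = np.log(log_mel - log_mel.min() + eps)  # shift so the log is defined
    l3m = np.log(l2m - l2m.min() + eps)
    return log_mel, l2m, l3m

# Synthetic stand-in for a Mel spectrogram (128 mel bands x 200 frames).
rng = np.random.default_rng(0)
mel = rng.random((128, 200)) + 0.01

log_mel, l2m, l3m = l2m_l3m(mel)
print(log_mel.shape, l2m.shape, l3m.shape)
```

Each of the three matrices can then be rendered as a spectrogram image and stacked or aggregated with the other SIFs before being fed to the CNN.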
์œ„ ๋…ผ๋ฌธ์—์„œ๋Š” DenseNet-161 ๋ชจ๋ธ์„ ์„ ํƒํ•˜๊ณ , ์ด ๋ชจ๋ธ์„ ESC-10, ESC-50, Urbansound8k ๋ฐ์ดํ„ฐ์…‹์—์„œ ์ „์ด ํ•™์Šต์„ ํ†ตํ•ด Fine-tuning ํ•˜์˜€์Šต๋‹ˆ๋‹ค. Fine-tuning์€ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์ƒˆ๋กœ์šด ์ž‘์—…์— ๋งž๊ฒŒ ์กฐ์ •ํ•˜๋Š” ๊ธฐ์ˆ ์ž…๋‹ˆ๋‹ค. ์ด ๋…ผ๋ฌธ์—์„œ๋Š” Fine-tuning์‹œ ํ•™์Šต๋ฅ  ์กฐ์ • ๊ธฐ์ˆ ์„ ์ด์šฉํ•˜์—ฌ ๊ฐœ๋ณ„์ ์ธ ์ตœ์  ํ•™์Šต๋ฅ ์„ ์„ ํƒํ•˜์—ฌ ์ ์šฉํ•˜์˜€์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์ด ๋…ผ๋ฌธ์—์„œ๋Š” Mel filter bank-based acoustic features ์™€ General features๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ ๋‹ค์–‘ํ•œ acoustic feature๋ฅผ ์ ์šฉํ•˜์˜€์œผ๋ฉฐ, ์ด๋ฅผ Multiple Features Aggregation ๊ธฐ์ˆ ์„ ์ด์šฉํ•˜์—ฌ ์ ์ ˆํ•˜๊ฒŒ ๊ฒฐํ•ฉํ•˜์˜€์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋‹ค์–‘ํ•œ ๊ธฐ์ˆ ๋“ค์˜ ์กฐํ•ฉ์œผ๋กœ DenseNet-161 ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ–ฅ์ƒ์‹œ์ผฐ๋‹ค๋Š” ๊ฒƒ์ด ์ด ๋…ผ๋ฌธ์—์„œ ์ œ์‹œํ•œ ์ฃผ์š” ๊ฒฐ๊ณผ ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค.
•
Audio feature fusion
file:///Users/yoohajun/Downloads/Environment_Sound_Classification_Based_on_Visual_M.pdf
Environment Sound Classification Based on Visual Multi-Feature Fusion and GRU-AWS