Classification of Wind Instrument Arrangement Sounds Using the Recurrent Neural Network (RNN) Method

Putri, Dwinggrit Oktaviani (2025) Classification of Wind Instrument Arrangement Sounds Using the Recurrent Neural Network (RNN) Method. Undergraduate thesis, UPN Veteran Jawa Timur.

Text: Cover.pdf, Download (1MB)
Text (Chapter 1): Bab 1.pdf, Download (135kB)
Text (Chapter 2): Bab 2.pdf, Restricted to Repository staff only until 18 September 2027 (382kB)
Text (Chapter 3): Bab 3.pdf, Restricted to Repository staff only until 18 September 2027 (282kB)
Text (Chapter 4): Bab 4.pdf, Restricted to Repository staff only until 18 September 2027 (534kB)
Text (Chapter 5): bab 5.pdf, Download (110kB)
Text (References): Daftar Pustaka.pdf, Download (90kB)
Text (Appendix): Lampiran.pdf, Restricted to Repository staff only (665kB)

Abstract

Wind instruments such as the saxophone, clarinet, and trumpet have distinct acoustic characteristics, including frequency, amplitude, and waveform, which serve as unique identifiers for each instrument. In the digital era, the classification of musical instruments for song production or arrangement is still largely performed manually. Unlike previous studies that generally focused on musical instrument classification in a broad sense, this research introduces a more specific approach targeting four types of wind instruments, aiming to improve classification accuracy and reduce misclassification among instruments with similar timbres. This study aims to develop a wind instrument classification system using the Recurrent Neural Network (RNN) method. The classification is based on extracted acoustic features such as Mel-Frequency Cepstral Coefficients (MFCC) and spectrograms (sonograms). Following feature extraction, recurrent cells, namely Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), are employed to capture temporal patterns in sequential data, particularly audio signals. The dataset consists of 1,200 audio files (.wav) representing four wind instruments (trumpet, baritone, mellophone, and tuba), with 300 audio samples per instrument. The experimental results show that the LSTM model achieved an accuracy of 94%, while the GRU model reached an accuracy of 98%. These findings highlight the effectiveness of RNNs, specifically the LSTM and GRU architectures, in classifying wind instruments with high accuracy. This study is expected to serve as a foundation for further development in the field of digital music, particularly in supporting automatic arrangement processes.
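The thesis itself uses library implementations of LSTM and GRU; as an illustration of the mechanism the abstract describes, the following numpy sketch shows how a single GRU cell folds a sequence of MFCC frames into one fixed-size hidden state that a classifier head could then map to the four instrument classes. All names, dimensions, and the random stand-in "MFCC" data are assumptions for demonstration, not the thesis' actual configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: processes one feature vector per time step."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = lambda *shape: rng.standard_normal(shape) * 0.1
        # Update gate, reset gate, and candidate-state parameters.
        self.Wz, self.Uz, self.bz = s(hidden_dim, input_dim), s(hidden_dim, hidden_dim), np.zeros(hidden_dim)
        self.Wr, self.Ur, self.br = s(hidden_dim, input_dim), s(hidden_dim, hidden_dim), np.zeros(hidden_dim)
        self.Wh, self.Uh, self.bh = s(hidden_dim, input_dim), s(hidden_dim, hidden_dim), np.zeros(hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)              # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)              # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)  # candidate state
        return (1.0 - z) * h + z * h_tilde                            # interpolate old/new

    def run(self, sequence):
        h = np.zeros(self.hidden_dim)
        for x in sequence:          # one MFCC frame at a time
            h = self.step(x, h)
        return h                    # summary of the whole sequence

# Stand-in "MFCC sequence": 40 frames of 13 coefficients each
# (random values in place of features extracted from a .wav file).
frames = np.random.default_rng(1).standard_normal((40, 13))
cell = GRUCell(input_dim=13, hidden_dim=32)
final_state = cell.run(frames)
print(final_state.shape)  # (32,)
```

Because the new state is a convex combination of the previous state and a tanh-bounded candidate, the hidden state stays in [-1, 1]; this gating is what lets GRUs (and, with a separate cell state, LSTMs) carry information across many audio frames.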

Item Type: Thesis (Undergraduate)
Contributors:
Contribution | Contributors | NIDN/NIDK | Email
Thesis advisor | Prasetya, Dwi Arman | NIDN 0005128001 | arman.prasetya.sada@upnjatim.ac.id
Thesis advisor | Saputra, Wahyu Syaifullah Jauharis | NIDN 0725088601 | wahyu.s.j.saputra.if@upnjatim.ac.id
Subjects: Q Science > Q Science (General)
Divisions: Faculty of Computer Science > Department of Data Science
Depositing User: Dwinggrit Oktaviani Putri
Date Deposited: 19 Sep 2025 03:17
Last Modified: 19 Sep 2025 03:17
URI: https://repository.upnjatim.ac.id/id/eprint/43744
