Develop new or improve existing sentiment and emotion AI algorithms with our Korean speech sentiment dataset that covers the basic six emotions: anger, fear, joy, love, sadness, and surprise.
Deploy new, or improve existing, Korean speech sentiment and emotion algorithms in days using our ready-to-use speech emotion dataset.
The dataset consists of recordings collected from thousands of people speaking Korean. The emotions and sentiment covered are: anger, fear, joy, love, sadness, and surprise.
Furthermore, the recordings are validated by humans and transcripts are available.
Language: Korean, spoken by both native and non-native speakers.
The speech emotion dataset contains audio clips of people recording themselves speaking with different emotions, with up to 15 minutes of speech per person. The speech is captured on mobile phones by a diverse crowd of speakers of all ages and backgrounds, which makes the dataset well suited to use cases involving mobile devices.
The recordings vary in length, averaging about 5 seconds per clip. Each clip is classified by background noise level, age group, gender, and region. Recordings are transcribed verbatim; spontaneous speech is transcribed exactly as the person said it.
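To illustrate how the metadata described above (emotion label, noise level, age group, gender, region) might be used to select training clips, here is a minimal sketch. The field names, record format, and noise categories are assumptions for illustration only, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    path: str            # location of the audio file
    transcript: str      # verbatim transcript of the clip
    emotion: str         # one of: anger, fear, joy, love, sadness, surprise
    noise_level: str     # assumed categories: "low", "medium", "high"
    age_group: str
    gender: str
    region: str

def filter_clips(clips, emotion=None, max_noise="high"):
    """Select clips matching an emotion label and a background-noise ceiling."""
    noise_rank = {"low": 0, "medium": 1, "high": 2}
    return [
        c for c in clips
        if (emotion is None or c.emotion == emotion)
        and noise_rank[c.noise_level] <= noise_rank[max_noise]
    ]

# Hypothetical records; real metadata values would come from the dataset.
clips = [
    Clip("a.wav", "...", "joy", "low", "18-29", "female", "Seoul"),
    Clip("b.wav", "...", "anger", "high", "30-44", "male", "Busan"),
]
quiet_joy = filter_clips(clips, emotion="joy", max_noise="medium")
```

A pipeline built this way can, for example, train first on low-noise clips and then broaden to the full range of noise levels.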
Emotion AI and sentiment analysis are sometimes used interchangeably, but emotion AI is an umbrella term for a range of algorithms, of which sentiment analysis is one.