Computer Science - Sound Publications (50)

This paper presents a novel deep neural network (DNN)-based speech enhancement method that aims to enhance the magnitude and phase components of speech signals simultaneously. The novelty of the proposed method is two-fold. First, to avoid the difficulty of direct clean-phase estimation, the proposed algorithm adopts real and imaginary (RI) spectrograms to prepare both input and output features.
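As a minimal sketch of how such RI features might be prepared (assuming librosa for the STFT; the frame sizes, sampling rate, and file names are illustrative, not the paper's settings):

```python
import numpy as np
import librosa

def ri_spectrogram(wav, n_fft=512, hop=128):
    """Split a signal's complex STFT into real/imaginary feature planes."""
    S = librosa.stft(wav, n_fft=n_fft, hop_length=hop)  # complex spectrogram
    return np.stack([S.real, S.imag], axis=0)           # shape: (2, freq, time)

# Hypothetical training pair: RI planes of noisy speech as input, RI planes
# of clean speech as the regression target, so phase is learned implicitly.
noisy, sr = librosa.load("noisy.wav", sr=16000)
clean, _ = librosa.load("clean.wav", sr=16000)
x, y = ri_spectrogram(noisy), ri_spectrogram(clean)
```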

Cross-modal audio-visual perception has been a long-standing topic in psychology and neurology, and various studies have discovered strong correlations in human perception of auditory and visual stimuli. Despite work on computational multimodal modeling, the problem of cross-modal audio-visual generation has not been systematically studied in the literature. In this paper, we make the first attempt to solve this cross-modal generation problem by leveraging the power of deep generative adversarial training.

In conversational speech, the acoustic signal provides cues that help listeners disambiguate difficult parses. For automatically parsing a spoken utterance, we introduce a model that integrates transcribed text and acoustic-prosodic features using a convolutional neural network over energy and pitch trajectories coupled with an attention-based recurrent neural network that accepts text and word-based prosodic features. We find that different types of acoustic-prosodic features are individually helpful, and together improve parse F1 scores significantly over a strong text-only baseline.

Peer-Led Team Learning (PLTL) is a learning methodology in which a peer leader coordinates a small group of students to collaboratively solve technical problems. PLTL has been adopted for various science, engineering, technology, and mathematics courses at several US universities. This paper proposes and evaluates a speech system for behavioral analysis of PLTL groups.

Sound propagation encompasses various acoustic phenomena, including reverberation. Current virtual acoustic methods, ranging from parametric filters to physically accurate solvers, can simulate reverberation with varying degrees of fidelity. We investigate the effects of reverberant sounds generated using different propagation algorithms on acoustic distance perception.

Speaker identification is the process of identifying a particular voice from a set of known speakers. In the speaker identification process, an unknown speaker's voice sample is compared against each of the known speakers enrolled in the system to produce a prediction. The prediction may match more than one known speaker whose voice is very close to that of the unknown speaker.

The automatic speaker identification procedure extracts features that help to identify the components of the acoustic signal while discarding other content such as background noise, emotion, and hesitation. The acoustic signal generated by a human is filtered by the shape of the vocal tract, including the tongue and teeth. The shape of the vocal tract determines what signal comes out in real time.

We present a new model for singing synthesis based on a modified version of the WaveNet architecture. Instead of modeling raw waveform, we model features produced by a parametric vocoder that separates the influence of pitch and timbre. This allows conveniently modifying pitch to match any target melody, facilitates training on more modest dataset sizes, and significantly reduces training and generation times.

Hidden Markov Models (HMMs) are a ubiquitous tool for modeling time series data, and have been widely used in two main tasks of Automatic Music Transcription (AMT): note segmentation, i.e. identifying the played notes after a multi-pitch estimation, and sequential post-processing.
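To illustrate the kind of HMM post-processing this describes, here is a minimal Viterbi decoder over binary note-on/note-off states. This is a generic sketch, not the paper's model; the transition matrix and salience values are invented for the example:

```python
import numpy as np

def viterbi(log_trans, log_emit):
    """Most likely state path. log_trans: (S, S); log_emit: (T, S)."""
    T, S = log_emit.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = log_emit[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans  # (prev state, next state)
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_emit[t]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Two states (0 = note off, 1 = note on); emissions derived from frame-wise
# multi-pitch salience, here faked for illustration.
log_A = np.log(np.array([[0.95, 0.05], [0.10, 0.90]]))
salience = np.array([0.1, 0.2, 0.8, 0.9, 0.85, 0.3, 0.1])
log_B = np.log(np.stack([1 - salience, salience], axis=1))
print(viterbi(log_A, log_B))  # smoothed note-on/off segmentation
```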

This paper presents sampling-based speech parameter generation using moment-matching networks for Deep Neural Network (DNN)-based speech synthesis. Although people never produce exactly the same speech even when trying to express the same linguistic and para-linguistic information, typical statistical speech synthesis produces completely the same speech.

Voice conversion (VC) using sequence-to-sequence learning of context posterior probabilities is proposed. Conventional VC using shared context posterior probabilities predicts target speech parameters from the context posterior probabilities estimated from the source speech parameters. Although conventional VC can be built from non-parallel data, it is difficult to convert speaker individuality such as phonetic property and speaking rate contained in the posterior probabilities because the source posterior probabilities are directly used for predicting target speech parameters.

In this paper, we design a system that performs real-time beat tracking of an audio signal. We use an Onset Strength Signal (OSS) to detect onsets and estimate tempos. Then, we form a Cumulative Beat Strength Signal (CBSS) by taking advantage of the OSS and the estimated tempos.
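A rough sketch of how an onset strength signal and a tempo estimate can be computed (a simplified spectral-flux version for illustration, not the authors' exact OSS/CBSS pipeline; the file name and BPM range are assumptions):

```python
import numpy as np
import librosa

y, sr = librosa.load("track.wav", sr=22050)
hop = 512
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=hop))

# OSS: half-wave rectified spectral flux, summed over frequency.
flux = np.diff(S, axis=1)
oss = np.maximum(flux, 0.0).sum(axis=0)

# Tempo: pick the strongest autocorrelation lag in a plausible BPM range.
ac = np.correlate(oss, oss, mode="full")[len(oss) - 1:]
lags = np.arange(1, len(ac))
bpm = 60.0 * sr / (hop * lags)
valid = (bpm >= 60) & (bpm <= 200)
best = lags[valid][np.argmax(ac[1:][valid])]
print("tempo ~ %.1f BPM" % (60.0 * sr / (hop * best)))
```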

End-to-end learning approaches have been shown to be effective and to give excellent performance for many systems, even with little training data. In this work we present an end-to-end learning approach for a novel application of software-defined radio (SDR) that uses prior knowledge of the transmitted speech message to detect and denoise the audio speech signal from the in-phase and quadrature components of its baseband version.

In this paper, we present a time-contrastive learning (TCL)-based unsupervised bottleneck (BN) feature extraction method for speech signals, with an application to speaker verification. The method exploits the temporal structure of a speech signal: it trains deep neural networks (DNNs) to discriminate temporal events obtained by uniformly segmenting the signal, without using any label information. This contrasts with conventional DNN-based BN feature extraction methods, which train DNNs on labeled data to discriminate speakers, passphrases, phones, or a combination of them. We consider different strategies for TCL and its combination with transfer learning.
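One plausible reading of the uniform-segmentation labeling, sketched below; the segment length, class count, and cyclic label assignment are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def tcl_labels(features, seg_len=10, n_classes=6):
    """Assign each frame the index of its uniform segment (mod n_classes).

    features: (n_frames, dim) frame-level features, e.g. MFCCs.
    Returns frame-wise pseudo-labels; no transcription or speaker
    labels are needed, only the temporal position of each frame.
    """
    n_frames = features.shape[0]
    seg_idx = np.arange(n_frames) // seg_len
    return (seg_idx % n_classes).astype(np.int64)

frames = np.random.randn(100, 39)  # stand-in for real frame features
labels = tcl_labels(frames)        # train any DNN classifier on (frames, labels)
```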

Being able to predict whether a song can be a hit has important applications in the music industry. Although it is true that the popularity of a song can be greatly affected by external factors such as social and commercial influences, to what degree audio features computed from musical signals (which we regard as internal factors) can predict song popularity is an interesting research question on its own. Motivated by the recent success of deep learning techniques, we attempt to extend previous work on hit song prediction by jointly learning the audio features and prediction models using deep learning.

Generative models in vision have seen rapid progress due to algorithmic improvements and the availability of high-quality image datasets. In this paper, we offer contributions in both these areas to enable similar progress in audio modeling. First, we detail a powerful new WaveNet-style autoencoder model that conditions an autoregressive decoder on temporal codes learned from the raw audio waveform.

Automatically detecting sound units of humpback whales in complex time-varying background noises is a current challenge for scientists. In this paper, we explore the applicability of the Convolutional Neural Network (CNN) method for this task. In the evaluation stage, we present six binary classification experiments on whale sound detection against different background noise types.

In this paper, we present MidiNet, a deep convolutional neural network (CNN) based generative adversarial network (GAN) that is intended to provide a general, highly adaptive network structure for symbolic-domain music generation. The network takes random noise as input and generates a melody sequence one measure (bar) after another. Moreover, it has a novel reflective CNN sub-model that allows us to guide the generation process by providing not only 1D but also 2D conditions.

Speech enhancement (SE) aims to reduce noise in speech signals. Most SE techniques focus on addressing audio information only. In this work, inspired by multimodal learning, which utilizes data from different modalities, and the recent success of convolutional neural networks (CNNs) in SE, we propose an audio-visual deep CNN (AVDCNN) SE model, which incorporates audio and visual streams into a unified network model.

A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module. Building these components often requires extensive domain expertise and may contain brittle design choices. In this paper, we present Tacotron, an end-to-end generative text-to-speech model that synthesizes speech directly from characters.

Automatic Music Transcription (AMT) is one of the oldest and most well-studied problems in the field of music information retrieval. Within this challenging research field, onset detection and instrument recognition play important roles in transcription systems, as they respectively help to determine the exact onset times of notes and to recognize the corresponding instrument sources. The aim of this study is to explore the usefulness of multiscale scattering operators for these two tasks on plucked string instrument and piano music.

Automatic Music Transcription (AMT) consists of automatically estimating the notes in an audio recording through three attributes: onset time, duration, and pitch. Probabilistic Latent Component Analysis (PLCA) has become very popular for this task. PLCA is a spectrogram factorization method, able to model a magnitude spectrogram as a linear combination of spectral vectors from a dictionary.
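To make the factorization idea concrete, here is a minimal sketch of KL-divergence NMF, which is closely related to PLCA (both model a magnitude spectrogram as a nonnegative combination of dictionary spectra). This is a generic illustration, not the paper's exact algorithm; rank and iteration count are arbitrary:

```python
import numpy as np

def kl_nmf(V, rank=16, n_iter=100, eps=1e-9):
    """Factor a magnitude spectrogram V (freq x time) as V ~ W @ H."""
    F, T = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, rank)) + eps   # dictionary of spectral vectors
    H = rng.random((rank, T)) + eps   # time-varying activations
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W, H

# Each column of H then indicates which dictionary spectra (and hence
# which pitches) are active in each frame, the starting point for AMT.
```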

Current speech enhancement techniques operate on the spectral domain and/or exploit some higher-level feature. The majority of them tackle a limited number of noise conditions and rely on first-order statistics. To circumvent these issues, deep networks are being increasingly used, thanks to their ability to learn complex functions from large example sets.

In this study we present a Deep Mixture of Experts (DMoE) neural-network architecture for single-microphone speech enhancement. In contrast to most speech enhancement algorithms, which overlook the speech variability mainly caused by phoneme structure, our framework comprises a set of deep neural networks (DNNs), each of which is an 'expert' in enhancing a given speech type corresponding to a phoneme. A gating DNN determines which expert is assigned to a given speech segment.
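A compact PyTorch sketch of the gated mixture-of-experts idea (feature dimension, expert count, and hidden sizes are illustrative assumptions; the paper's exact architecture may differ):

```python
import torch
import torch.nn as nn

class DeepMoE(nn.Module):
    """Gating DNN softly assigns each segment to phoneme-type experts."""
    def __init__(self, dim=257, n_experts=13, hidden=512):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, dim))
            for _ in range(n_experts)
        ])
        self.gate = nn.Sequential(nn.Linear(dim, n_experts),
                                  nn.Softmax(dim=-1))

    def forward(self, x):                                     # x: (B, dim)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, dim)
        w = self.gate(x).unsqueeze(-1)                            # (B, E, 1)
        return (w * outs).sum(dim=1)  # gate-weighted combination of experts

enhanced = DeepMoE()(torch.randn(8, 257))  # toy batch of noisy features
```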

In this paper, we present a transfer learning approach for music classification and regression tasks. We propose to use a pretrained convnet feature, a concatenated feature vector using activations of feature maps of multiple layers in a trained convolutional network. We show how this convnet feature can serve as a general-purpose music representation.
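The concatenated multi-layer feature can be sketched with PyTorch forward hooks; the toy model and layer choice below are placeholders standing in for a trained music-tagging network:

```python
import torch
import torch.nn as nn

def convnet_feature(model, layers, x):
    """Concatenate pooled activations from several layers of a trained CNN."""
    feats = []
    hooks = [l.register_forward_hook(
        lambda m, i, o: feats.append(o.mean(dim=(2, 3)))) for l in layers]
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return torch.cat(feats, dim=1)  # one general-purpose feature vector

# Toy stand-in for a pretrained convnet.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
x = torch.randn(1, 1, 96, 128)  # batch of log-mel spectrograms
f = convnet_feature(model, [model[0], model[2]], x)  # shape (1, 16 + 32)
```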

This paper shows how a time series of measurements of an evolving system can be processed to create an inner time series that is unaffected by any instantaneous invertible, possibly nonlinear transformation of the measurements. An inner time series contains information that does not depend on the nature of the sensors, which the observer chose to monitor the system. Instead, it encodes information that is intrinsic to the evolution of the observed system.

This paper presents a statistical method for music transcription that can estimate score times of note onsets and offsets from polyphonic MIDI performance signals. Because performed note durations can deviate greatly from score-indicated values, previous methods had the problem of not being able to accurately estimate offset score times (or note values) and thus could only output incomplete musical scores. Based on observations that the pitch context and onset score times are influential on the configuration of note values, we construct a context-tree model that provides prior distributions of note values using these features and combine it with a performance model in the framework of Markov random fields.

Deep learning techniques have recently been used to tackle the audio source separation problem. In this work, we propose to use deep convolutional denoising auto-encoders (CDAEs) for monaural audio source separation. We use as many CDAEs as there are sources to be separated from the mixed signal.
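One possible shape for such a CDAE in PyTorch, with one network per source; the layer sizes and patch dimensions are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CDAE(nn.Module):
    """Conv denoising auto-encoder mapping a mixture spectrogram to one source."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1),
        )

    def forward(self, mix):
        return self.dec(self.enc(mix))

# One CDAE per source: each is trained to regress its own source's
# spectrogram from the same mixture input.
sources = [CDAE() for _ in range(2)]
mix = torch.randn(4, 1, 64, 64)  # batch of mixture magnitude patches
estimates = [net(mix) for net in sources]
```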

In this paper we analyze the gate activation signals inside gated recurrent neural networks and find that the temporal structure of such signals is highly correlated with phoneme boundaries. This correlation is further verified by a set of experiments on phoneme segmentation, in which better results were obtained compared to standard approaches.

In this paper, we propose a novel technique for the direct recognition of multiple speech streams given a single channel of mixed speech, without first separating them. Our technique is based on permutation invariant training (PIT) for automatic speech recognition (ASR). In PIT-ASR, we compute the average cross entropy (CE) over all frames in the whole utterance for each possible output-target assignment, pick the one with the minimum CE, and optimize for that assignment.
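The permutation-search step at the heart of PIT can be sketched as follows: compute the utterance-level CE for every output-to-speaker assignment and keep the minimum. A generic PyTorch illustration with toy shapes, not the paper's implementation:

```python
import itertools
import torch
import torch.nn.functional as F

def pit_ce_loss(logits, targets):
    """logits: list of S tensors (T, C); targets: list of S tensors (T,).

    Average cross entropy over all frames for each possible
    output-target assignment; return the minimum.
    """
    S = len(logits)
    best = None
    for perm in itertools.permutations(range(S)):
        ce = sum(F.cross_entropy(logits[o], targets[t])
                 for o, t in zip(range(S), perm)) / S
        best = ce if best is None or ce < best else best
    return best

# Two output streams, two reference label streams (toy shapes).
logits = [torch.randn(50, 40, requires_grad=True) for _ in range(2)]
targets = [torch.randint(0, 40, (50,)) for _ in range(2)]
loss = pit_ce_loss(logits, targets)
loss.backward()  # gradients flow through the best assignment only
```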

We propose a multi-objective framework that learns both secondary targets not directly related to the intended speech enhancement (SE) task and the primary target of clean log-power spectra (LPS) features, which are used directly to construct the enhanced speech signals. In deep neural network (DNN)-based SE, we introduce an auxiliary structure to learn secondary continuous features, such as mel-frequency cepstral coefficients (MFCCs), and categorical information, such as the ideal binary mask (IBM), and integrate it into the original DNN architecture for joint optimization of all the parameters. This joint estimation scheme imposes additional constraints not available in the direct prediction of LPS, and potentially improves the learning of the primary target.
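A minimal sketch of the joint-objective idea: one shared network with a primary LPS head plus auxiliary MFCC and IBM heads, trained with a weighted sum of losses. Layer sizes and loss weights are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTargetSE(nn.Module):
    def __init__(self, dim=257, n_mfcc=13, hidden=1024):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden), nn.ReLU())
        self.lps = nn.Linear(hidden, dim)      # primary: clean LPS
        self.mfcc = nn.Linear(hidden, n_mfcc)  # auxiliary: continuous MFCCs
        self.ibm = nn.Linear(hidden, dim)      # auxiliary: IBM logits

    def forward(self, x):
        h = self.shared(x)
        return self.lps(h), self.mfcc(h), self.ibm(h)

def joint_loss(outs, lps_t, mfcc_t, ibm_t, a=0.1, b=0.1):
    """Primary MSE on LPS plus weighted auxiliary terms (ibm_t in {0, 1})."""
    lps, mfcc, ibm = outs
    return (F.mse_loss(lps, lps_t)
            + a * F.mse_loss(mfcc, mfcc_t)
            + b * F.binary_cross_entropy_with_logits(ibm, ibm_t))
```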

With the ever-increasing number of car-mounted electronic devices and their complexity, audio classification is increasingly important for the automotive industry as a fundamental tool for human-device interaction. Existing approaches to audio classification, however, fall short, as the unique and dynamic audio characteristics of in-vehicle environments are not appropriately taken into account. In this paper, we develop an audio classification system that classifies an audio stream into music, speech, speech+music, and noise, adaptively depending on driving environments including highway, local road, crowded city, and stopped vehicle.

Environmental sound detection is a challenging application of machine learning because of the noisy nature of the signal and the small amount of (labeled) data that is typically available. This work thus presents a comparison of several state-of-the-art Deep Learning models on the IEEE Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 challenge task and data, classifying sounds into one of fifteen common indoor and outdoor acoustic scenes, such as bus, cafe, car, city center, forest path, library, and train. In total, 13 hours of stereo audio recordings are available, making this one of the largest datasets available.

Dance Dance Revolution (DDR) is a popular rhythm-based video game. Players perform steps on a dance platform in synchronization with music as directed by on-screen step charts. While many step charts are available in standardized packs, users may grow tired of existing charts, or wish to dance to a song for which no chart exists.

The signal amplitude envelope allows one to obtain information about signal features for different applications. It is commonly agreed that the envelope is a signal that varies slowly and should pass smoothly through the prominent peaks of the data. It has been widely used in sound analysis and also for different physiological variables in animal and human studies.
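For reference, a common way to compute such an envelope is the magnitude of the analytic signal followed by low-pass smoothing; this is a standard textbook approach, not necessarily the paper's method, and the cutoff frequency here is an illustrative choice:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(x, fs, cutoff_hz=20.0):
    """Slowly varying envelope passing smoothly through the signal's peaks."""
    analytic = hilbert(x)                    # analytic signal
    env = np.abs(analytic)                   # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2))   # low-pass keeps the envelope slow
    return filtfilt(b, a, env)               # zero-phase smoothing

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# 440 Hz tone with a 3 Hz amplitude modulation.
x = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
env = amplitude_envelope(x, fs)              # recovers the 3 Hz modulation
```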

The focus of this work is to study how to efficiently tailor Convolutional Neural Networks (CNNs) towards learning timbre representations from log-mel magnitude spectrograms. We first review trends in the design of CNN architectures. Through this literature overview we discuss the crucial points to consider for efficiently learning timbre representations using CNNs.

The term gestalt has been widely used in the field of psychology to describe the human mind's tendency to perceive objects not in parts but as a unified whole. Music in general is polytonic.

Despite the significant progress made in the recent years in dictating single-talker speech, the progress made in speaker independent multi-talker mixed speech separation and tracing, often referred to as the cocktail-party problem, has been less impressive. In this paper we propose a novel technique for attacking this problem. The core of our technique is permutation invariant training (PIT), which aims at minimizing the source stream reconstruction error no matter how labels are ordered.

Audio tagging aims to perform multi-label classification on audio chunks and it is a newly proposed task in the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016) challenge. This task encourages research efforts to better analyze and understand the content of the huge amounts of audio data on the web. The difficulty in audio tagging is that it only has a chunk-level label without a frame-level label.

Deep learning models (DLMs) are state-of-the-art techniques in speech recognition. However, training good DLMs can be time consuming, especially for production-size models and corpora. Although several parallel training algorithms have been proposed to improve training efficiency, there is no clear guidance on which one to choose for the task at hand, due to a lack of systematic and fair comparisons among them.

Psychiatric illnesses are often associated with multiple symptoms, whose severity must be graded for accurate diagnosis and treatment. This grading is usually done by trained clinicians based on human observations and judgments made within doctor-patient sessions. Current research provides sufficient reason to expect that the human voice may carry biomarkers or signatures of many, if not all, these symptoms.

For enhancing noisy signals, pre-trained single-channel speech enhancement schemes exploit prior knowledge about the shape of typical speech structures. This knowledge is obtained from training data, for which methods from machine learning are used.

The field of speech recognition is in the midst of a paradigm shift: end-to-end neural networks are challenging the dominance of hidden Markov models as a core technology. Using an attention mechanism in a recurrent encoder-decoder architecture solves the dynamic time alignment problem, allowing joint end-to-end training of the acoustic and language modeling components. In this paper we extend the end-to-end framework to encompass microphone array signal processing for noise suppression and speech enhancement within the acoustic encoding network.

We introduce in this work an efficient approach for audio scene classification using deep recurrent neural networks. A scene audio signal is first transformed into a sequence of high-level label tree embedding feature vectors. The vector sequence is then divided into multiple subsequences, on which a deep GRU-based recurrent neural network is trained for sequence-to-label classification.

Acoustic beamforming with a microphone array is a suitable technology for remote acoustic surveillance, as the system has no mechanical parts and is of moderate size. However, for a practical implementation, several challenges need to be addressed, such as array geometry, microphone characteristics, and the digital beamforming algorithms. This paper presents a simulated analysis of the effect of array geometry on the beamforming response.
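The kind of geometry analysis described can be illustrated with a far-field delay-and-sum beam pattern for a uniform linear array; all parameters below are example values, not taken from the paper:

```python
import numpy as np

def ula_beam_pattern(n_mics, spacing, freq, steer_deg, c=343.0):
    """Delay-and-sum response of a uniform linear array vs. arrival angle."""
    k = 2 * np.pi * freq / c
    angles = np.radians(np.linspace(-90, 90, 361))
    m = np.arange(n_mics)[:, None]
    # Steering weights aimed at steer_deg; response to plane waves from angles.
    w = np.exp(-1j * k * m * spacing * np.sin(np.radians(steer_deg)))
    a = np.exp(-1j * k * m * spacing * np.sin(angles)[None, :])
    return angles, np.abs(w.conj().T @ a).ravel() / n_mics

angles, resp = ula_beam_pattern(n_mics=8, spacing=0.04, freq=2000,
                                steer_deg=30)
# Wider spacing narrows the main lobe but causes spatial aliasing
# (grating lobes) once freq exceeds roughly c / (2 * spacing).
```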

Bird sounds possess distinctive spectral structure which may exhibit small shifts in spectrum depending on the bird species and environmental conditions. In this paper, we propose using convolutional recurrent neural networks on the task of automated bird audio detection in real-life environments. In the proposed method, convolutional layers extract high dimensional, local frequency shift invariant features, while recurrent layers capture longer term dependencies between the features extracted from short time frames.
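A skeletal convolutional recurrent network in PyTorch matching that description (convolutions over the spectrogram, a recurrent layer over time, a clip-level output); the layer sizes and input shape are illustrative:

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Conv layers learn shift-invariant spectral features; a GRU models time."""
    def __init__(self, n_mels=40, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),            # pool frequency only, keep time
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.gru = nn.GRU(32 * (n_mels // 4), hidden, batch_first=True,
                          bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)  # bird present / absent

    def forward(self, x):                    # x: (B, 1, n_mels, T)
        h = self.conv(x)                     # (B, 32, n_mels // 4, T)
        B, C, Fq, T = h.shape
        h = h.permute(0, 3, 1, 2).reshape(B, T, C * Fq)
        h, _ = self.gru(h)                   # (B, T, 2 * hidden)
        return torch.sigmoid(self.out(h.mean(dim=1)))  # clip-level probability

pred = CRNN()(torch.randn(2, 1, 40, 500))    # toy batch of mel spectrograms
```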

This study proposes a fully convolutional network (FCN) model for raw waveform-based speech enhancement. The proposed system performs speech enhancement in an end-to-end manner.

Music auto-tagging is often handled in a similar manner to image classification by regarding the 2D audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have different levels of abstractions. Considering this issue, we propose a convolutional neural networks (CNN)-based architecture that embraces multi-level and multi-scaled features.

Recently, the end-to-end approach, which learns hierarchical representations from raw data using deep convolutional neural networks, has been successfully explored in the image, text, and speech domains. This approach has been applied to musical signals as well, but has not yet been fully explored. To this end, we propose sample-level deep convolutional neural networks that learn representations from very small grains of waveforms.

Sound and vision are the primary modalities that influence how we perceive the world around us. Thus, it is crucial to incorporate information from these modalities into language to help machines interact better with humans. While existing works have explored incorporating visual cues into language embeddings, the task of learning word representations that respect auditory grounding remains under-explored.