what i'm working on

undergrad thesis

Central bank announcements usually carry few surprises in terms of policy, but we listen to them to update our outlook on future decisions. As humans listening to other humans, we pay attention not only to the content of the words (i.e. a "hawkish" or a "dovish" speech) but to *how* those words are delivered. Like in any conversation, we subconsciously interpret delivery cues: tone, pacing, nervousness, stuttering, fidgeting and other emotional markers.

But we tend to be extremely biased when evaluating these signals, easily swayed by charisma, reputation or appearance.

What if you could train a model to quantify emotional expressions of central bankers in announcements and see how these variables affect markets? Can subtle vocal shifts signal underlying sentiment and influence asset prices?

To investigate this, I'm building a dataset of FOMC press conference audio and using open source models like emotion2vec to extract emotion scores and embeddings, then looking for correlations with market reactions.
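
As a rough sketch, the extraction step could look something like this (assuming the `funasr` `AutoModel` interface that the emotion2vec checkpoints are distributed through; the checkpoint name, file name and output keys here follow the model card and may differ across versions):

```python
# pip install funasr
# emotion2vec checkpoints are served through FunASR's AutoModel interface.
from funasr import AutoModel

# Checkpoint name as listed on the emotion2vec model card (several variants exist).
model = AutoModel(model="iic/emotion2vec_plus_large")

# One pass over a single press conference recording (hypothetical file name).
# granularity="utterance" gives one prediction per utterance;
# extract_embedding=True also returns the raw embedding vector.
results = model.generate(
    "fomc_press_conference.wav",
    granularity="utterance",
    extract_embedding=True,
)

# Each result carries categorical emotion labels with scores
# ("angry", "happy", "neutral", ...) and, with extract_embedding=True,
# a dense feature vector under "feats".
for r in results:
    print(r["labels"], r["scores"])
```

Each announcement then reduces to a sequence of utterance-level emotion scores and embeddings, which can be aggregated and tested against market moves around the release.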

I'll upload it here once it's finished.