Seonghee Lee


Information Science 
(Data Science +
Interactive Tech)
@ Cornell University

About me

CV

Contact: 
sl994@cornell.edu


Seonghee Lee



My research aims to design interaction with autonomous systems.
I am interested in human interaction with autonomous vehicles, telepresence, and the development of novel sustainable technology.

Currently, my work focuses on designing telepresence robots with abstract movements with Professor François Guimbretière at Cornell. I am also working on an interactive fridge with features that help reduce food waste.
Recently, I worked on Project IEUM, a multipurpose robot envisioned for future transportation technology. I am an undergraduate studying Information Science (B.S.) with a concentration in Data Science and Interactive Technology at Cornell University.

Design Sketches




2020.12.01 – 2021.01.15


Non-Verbal Interaction In Autonomous Vehicles

Cochl. × Mercedes-Benz



During my internship as a software engineer at Cochl (a machine learning company specializing in non-verbal sound recognition AI), I created the cockpit display that integrates Cochl's Sound Recognition AI into the Mercedes-Benz car. This was a very meaningful experience for me because it first opened my eyes to issues in human interaction with autonomous systems; after this internship, I became interested in studying the development of human interaction with autonomous technology.

With the rise of smart cars and more interactive technology, Daimler was interested in creating a more emotionally aware car. Cochl's sound AI technology can recognize non-verbal sounds such as sighs, coughs, sirens, and machine malfunctions, letting the car pick up on emotional human states and environmental conditions that speech recognition alone would miss.

Among these capabilities, the chart below shows the ones we planned to add to Mercedes-Benz.



Development


My main job was to create the front-end user display for the Mercedes-Benz cockpit and to integrate the front end with the back-end web server. Messages received from the back-end SDK were caught by the web server and passed on to the front end to display changes or notify the user. The whole process took about a month, and creating user interaction with AI devices on a smart-car display was an exciting experience for me. A rough sketch of this message flow is shown below, followed by the features I developed in this software application.
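As a minimal sketch of how the front end might listen for these messages: the event name "sound-event" and the payload fields "label" and "confidence" are hypothetical placeholders, since the real code is private.

```javascript
// Minimal front-end sketch (assumes the Socket.IO client script is loaded,
// e.g. via <script src="/socket.io/socket.io.js"></script>).
// The "sound-event" name and the { label, confidence } payload are
// hypothetical placeholders, not Cochl's actual API.
const socket = io("http://localhost:3000");

socket.on("sound-event", (event) => {
  // e.g. event = { label: "baby_cry", confidence: 0.94 }
  if (event.confidence < 0.8) return; // ignore low-confidence detections

  const display = document.getElementById("cockpit-display");
  display.textContent = `Detected: ${event.label}`;

  // Trigger a short CSS animation to notify the user.
  display.classList.add("alert-animation");
  setTimeout(() => display.classList.remove("alert-animation"), 3000);
});
```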


Feature 1: Harmonizer


The video below is a demonstration of the Harmonizer feature!







Feature 2: Emergency




Feature 3: Secret Language



Feature 4: Baby Cry






Feature 5: Cough





Feature 6: Hand Clap




Feature 7: Sigh




Feature 8: Dog Bark




Feature 9: Animal Game





These were the features I made for the Sound Recognition AI user display. For the code, I used JavaScript with animations, Socket.IO to listen for messages sent from the web server, and Docker for deployment. Due to company privacy, I cannot share the full code, but a rough sketch of the server-side relay is included below. If you have further inquiries about this project, please feel free to contact me anytime at sl994@cornell.edu.
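For context, the web-server side of the relay might look something like this: a minimal sketch, where the /detections endpoint and the payload shape are hypothetical stand-ins for the private Cochl SDK integration.

```javascript
// Minimal Node.js relay sketch using Express and Socket.IO.
// The /detections endpoint and the payload shape are hypothetical
// stand-ins for the private Cochl SDK integration.
const express = require("express");
const http = require("http");
const { Server } = require("socket.io");

const app = express();
app.use(express.json());

const server = http.createServer(app);
const io = new Server(server);

// The back-end SDK posts each detection here; broadcast it to every
// connected cockpit display.
app.post("/detections", (req, res) => {
  io.emit("sound-event", req.body); // e.g. { label: "siren", confidence: 0.97 }
  res.sendStatus(200);
});

server.listen(3000, () => console.log("Relay listening on :3000"));
```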











Seonghee Lee