Seonghee Lee


Information Science 
(Data Science +
Interactive Tech)
@ Cornell University





















About me

CV

Contact: 
sl994@cornell.edu


Seonghee Lee



My research aims to design interaction with autonomous systems.
I am interested in human interaction with autonomous vehicles, telepresence, and the development of novel sustainable technology.

Currently, my work focuses on designing telepresence robots with abstract movements with Professor Francois Guimbretiere at Cornell. Additionally, I am working on creating an interactive fridge with features to help reduce food waste.
Recently, I worked on Project IEUM, a multipurpose robot envisioned for future transportation technology. I am an undergraduate studying Information Science (B.S.) with a concentration in Data Science and Interactive Technology at Cornell University.

Design Sketches







Enhanced Eye Movement on a Telepresence Robot 





This page documents the design process of an enhanced eye movement for a telepresence robot. I am following the design process and suggestions outlined in
“Designing Robots with Movement in Mind” (Guy Hoffman and Wendy Ju, 2014).



Paper Draft : https://docs.google.com/document/d/1Ohf1UVxNk75PaBIZgdGvESnKLLCGf1EPwfArmaGeig8/edit 

Project Explanation


A long-standing challenge in video-mediated communication systems is representing a remote participant’s gaze direction correctly in the local environment. To address this issue, we developed a robot with enhanced eye movement that communicates effectively where the remote participant is looking.


 


In the design of telepresence robots, movement usually comes in the form of head movement, since head motion can communicate where a person’s attention is. However, in the case of a telepresence robot without a video screen, head movement alone is not enough to communicate that attention.

This project attempts to express enhanced eye movement on a telepresence robot to increase spatial presence, show the exact focus of attention, and enhance the feeling of presence.




Telepresence Robots without a Screen


My research started by looking at telepresence robots without a screen. A few screenless telepresence robots have started to be used commercially.




The robot cafe telepresence robot allows physically disabled people to work remotely; the AV1 robot allows children with illness to participate in classroom environments.



Inspiration Sources


The shapes of these robots usually come in abstract forms so that they can be accessible to all people. However, from studying the interactions these robots have, I became interested in ways we could show even more expressive movement to increase spatial presence and to communicate the focus of attention. Below are some sources of inspiration.

 


Adelino Robot by Tiago Riebiero, Luna Robot by Robert Wuss, The Greeting Machine and the Kip Robot by miLab




Design Goal 


Based on these designs, my design goal is to create enhanced eye movement on the Blossom robot for telepresence.
The Blossom robot currently maps a person’s head movement; my goal is to study how I can add enhanced eye movement to it in an abstract way.








Skeleton Prototype



By studying multiple design mechanisms for robot arms and robot eye movement, I came up with a skeleton prototype.
This design uses three servo motors: the motor on the bottom controls the left-right movement of the eye, and the second servo motor controls the up-down movement. I added an additional motor at the top to express even more subtle movements of the eye.

The movement of this skeleton prototype is currently run in a Wizard of Oz mode using two joysticks. 





Here are some videos of the movements expressed by this skeleton prototype.







Prototype 2: More Specific Design


After testing the first prototype, I decided that two servos would be enough: the first servo at the bottom would control the eye’s left-right movements, and the second servo would control its up-down movements.
Like the Greeting Machine, a neodymium magnet would be attached to the top of the bar connected to the second servo. The 3D-printed sphere would also have a neodymium magnet inside it, causing it to move along with the bar.








The final robot would look like this: the head of the Blossom robot would be replaced by a 3D-printed head, with the eye-moving mechanism placed inside. From the outside, it would look like an eye moving around the head.


3D printing materials for the second prototype

 


Testing the prototype on the Blossom robot:
I placed the prototype on the Blossom robot to check whether the head would hold it.
I will have to think of a way to hold not only the prototype but also the Arduino Uno board and a battery.



Sketch of Experiment





Though the experiment condition is not complete at the moment, the plan is as follows: the researcher will be looking down at the activity area while the system captures the researcher’s eye movements. Then, on the user’s side, the researcher’s eye movements will be translated onto the enhanced eye-movement robot.
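
As a rough sketch of that translation step (the value ranges and function below are my own placeholder assumptions, not a finalized part of the experiment), a normalized gaze point could be mapped to the two eye servos like this:

# Hypothetical mapping from a normalized gaze point to the two eye servos.
# Assumes gaze x, y arrive in the range 0..1 (0, 0 = top-left of the activity
# area) and that each servo takes an angle between 0 and 180 degrees.
def gaze_to_servo_angles(gaze_x, gaze_y, pan_range=(30, 150), tilt_range=(60, 120)):
    def lerp(value, low, high):
        value = min(max(value, 0.0), 1.0)  # clamp to 0..1
        return low + value * (high - low)
    pan = lerp(gaze_x, *pan_range)    # bottom servo: left-right
    tilt = lerp(gaze_y, *tilt_range)  # second servo: up-down
    return pan, tilt

print(gaze_to_servo_angles(0.5, 0.5))  # roughly centered eye
print(gaze_to_servo_angles(0.0, 1.0))  # far left, looking down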



Prototype 3 : Added Lever 


After testing prototype 2, I found that the ball for the eye should be bigger in order to have impact. Additionally, for smoother movement, I decided to attach a lever to the second servo motor.
Below are the sketch, the 3D-printed elements, and the final look.



















Overall Findings: 

1.  Dome head? Remove the head shape and just use the body?
    - If I use the dome head as the base of the ball, the movement of the eye is not very visible. If I show the entire body of the movement, I can make it more expressive than it currently is.

2.  The magnet should be placed inside the ball.
    - Placing the magnet outside causes it to fall off continuously; I need to buy sphere magnets.

3.  Speed of the eye movement
    - We might need one more variable to measure the speed of the eye movements. Should we account for this variable? If so, how?

4.  A larger magnet is needed for the base too.



Next Steps 




Eye-Tracking Software Testing


Using an open-source eye-tracking program (https://github.com/stepacool/Eye-Tracker), I printed out the x, y coordinates of the tracked eye.





However, the eye-tracking software in this GitHub repository is not that accurate. I will have to look into the code to see if I can make it a bit more accurate, but right now the numbers are all over the place.




When I look too far down, it stops tracking my eye movement.


The specific documentation of how this code works is here: https://medium.com/@stepanfilonov/tracking-your-eyes-with-python-3952e66194a6

Looking at the documentation, it seems that if I adjust the threshold or try different lighting, I could get different values.

I am thinking of using either just the left or just the right eye data to get the x, y coordinates.
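
For reference, here is a condensed sketch of the thresholding approach the article describes (this is my own reconstruction rather than the repository’s exact code, and the threshold value is just a starting point to tune per lighting):

# Find the eye region with a Haar cascade, threshold it, and treat the
# remaining dark blob as the pupil; print its x, y in frame coordinates.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.maxArea = 1500
detector = cv2.SimpleBlobDetector_create(params)

THRESHOLD = 42  # tune per lighting condition

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in eyes[:1]:  # use only one detected eye
        eye = gray[y:y + h, x:x + w]
        _, eye_bin = cv2.threshold(eye, THRESHOLD, 255, cv2.THRESH_BINARY)
        eye_bin = cv2.erode(eye_bin, None, iterations=2)
        eye_bin = cv2.dilate(eye_bin, None, iterations=4)
        eye_bin = cv2.medianBlur(eye_bin, 5)
        for kp in detector.detect(eye_bin):
            print("pupil x, y:", x + kp.pt[0], y + kp.pt[1])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()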


Connecting Eye Tracker to Servo


From the YouTube tutorial (https://www.youtube.com/watch?v=tJaXkdNaCQw), I learned that I can control servo movement from Python using the Firmata protocol. I imported the pyfirmata library and was able to move the servo motors from Python with this kind of code.
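
A minimal sketch of that pyfirmata servo code (the serial port name and pin numbers are placeholders for the actual wiring, and the Arduino has to be running the StandardFirmata sketch):

# Sweep two servos back and forth once over Firmata.
import time
from pyfirmata import Arduino

board = Arduino("COM3")              # e.g. "/dev/ttyUSB0" on Linux
pan_servo = board.get_pin("d:9:s")   # "s" = servo mode on digital pin 9
tilt_servo = board.get_pin("d:10:s")

for angle in list(range(0, 181, 10)) + list(range(180, -1, -10)):
    pan_servo.write(angle)
    tilt_servo.write(180 - angle)
    time.sleep(0.05)

board.exit()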



However, when I try to connect this to the eye-tracking code, I am getting this error. I’m working on this right now ;)






I have also tested the GazeTracking library: https://github.com/antoinelame/GazeTracking. Though I was able to get the x, y coordinates, the library did not work if I looked too far up or down. Additionally, I had to keep a fixed head position in order to get accurate coordinates.
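
For reference, this is roughly how the coordinates come out of that library, following its README example (method names may differ between versions):

# Pull pupil coordinates and gaze ratios from the GazeTracking library.
import cv2
from gaze_tracking import GazeTracking

gaze = GazeTracking()
webcam = cv2.VideoCapture(0)

while True:
    ok, frame = webcam.read()
    if not ok:
        break
    gaze.refresh(frame)

    left_pupil = gaze.pupil_left_coords()   # (x, y) in pixels, or None
    horizontal = gaze.horizontal_ratio()    # 0.0 (right) .. 1.0 (left), or None
    vertical = gaze.vertical_ratio()        # 0.0 (top) .. 1.0 (bottom), or None
    print(left_pupil, horizontal, vertical)

    cv2.imshow("gaze", gaze.annotated_frame())
    if cv2.waitKey(1) & 0xFF == 27:         # Esc to quit
        break

webcam.release()
cv2.destroyAllWindows()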



Other Solutions: Tobii Eye Tracker, Manual Control


The Tobii eye tracker gives the x, y, z position of the eye relative to the screen; the z coordinate is the distance of the user’s eye from the screen. The Python SDK documentation for the eye tracker is here: https://developer.tobiipro.com/python/python-step-by-step-guide.html
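
A minimal sketch based on the SDK’s step-by-step guide (the dictionary keys come from that guide and still need to be verified on our tracker):

# Subscribe to gaze data with the Tobii Pro Python SDK for a few seconds.
import time
import tobii_research as tr

tracker = tr.find_all_eyetrackers()[0]

def gaze_data_callback(gaze_data):
    # Normalized (x, y) of where the left eye looks on the screen.
    print("gaze point:", gaze_data["left_gaze_point_on_display_area"])
    # (x, y, z) of the eye itself in mm; z is the distance from the screen.
    print("eye position:", gaze_data["left_gaze_origin_in_user_coordinate_system"])

tracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback, as_dictionary=True)
time.sleep(5)  # collect data for a few seconds
tracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)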




Additionally, the Luna eye-tracking robot seems to be built with the Tobii eye tracker! I have contacted the author of the Luna robot, so maybe I can get help from him regarding how to make it.

http://www.interactivearchitecture.org/__trashed-10.html 


Another solution is to use a manual control system instead of an eye tracker. I can use Python code to manipulate where the robot’s eye looks while I am talking to the person, or find a way to use joysticks or keypads to control the movement.
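
A rough sketch of what that manual fallback could look like, reusing the pyfirmata setup from above (the key bindings, serial port, and pins are placeholder assumptions):

# Nudge the eye servos from the keyboard instead of an eye tracker.
from pyfirmata import Arduino

board = Arduino("COM3")
pan = board.get_pin("d:9:s")
tilt = board.get_pin("d:10:s")

pan_angle, tilt_angle = 90, 90  # start centered
STEP = 10

while True:
    key = input("a/d = left/right, w/s = up/down, q = quit: ").strip().lower()
    if key == "q":
        break
    if key == "a":
        pan_angle = max(0, pan_angle - STEP)
    elif key == "d":
        pan_angle = min(180, pan_angle + STEP)
    elif key == "w":
        tilt_angle = min(180, tilt_angle + STEP)
    elif key == "s":
        tilt_angle = max(0, tilt_angle - STEP)
    pan.write(pan_angle)
    tilt.write(tilt_angle)

board.exit()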


Eye-Tracking Software - Tobii Eye Tracker


We will use the Tobii eye tracker for this study.


Video Prototype



The next step was to study specific videos of people’s eye movements during interaction, and to use Blender and the skeleton prototype to study how to express these movements.



Paper Research 



Additional paper research and a record of research progress are available here.

Paper Research Google Doc 
https://docs.google.com/document/d/1kLTic_vl4dRJ85NxtwdzEyo4ePgKiGNLl6qzDALP8xU/edit?usp=sharing 



Paper Research Miroboard 
https://miro.com/app/board/uXjVOKcwNk4=/