Billibon H. Yoshimi

Watch OUT!!! There is something behind you. Nah... just foolin'. No, there is nothing wrong with your Internet browser... and yes, I am looking off to the left of your shoulder. This can happen to you too if you are distracted when your picture is being snapped! So... PAY ATTENTION! There will be a quiz after you finish reading this page. 30 minutes. Closed browser.

OK, OK. Enough with the clever kitsch... on to the "Who am I?" and "What do I do?" questions.


Who am I?

My name is Billibon Yoshimi. FAQ 1: As you might have guessed, my first name is a hybrid Japanese-English name concocted by my father. It's actually the concatenation of my father's first name (Bon) and his best friend's first name (William).

I was born in Scottsbluff, Nebraska. I've spent part of my life living on a lake: Lake Minatare, Nebraska. Most of my formative years, though, were spent in Scottsbluff (6th through 12th grade).

In 1985, I entered the Columbia University School of Engineering and Applied Science. I majored in computer science and developed an interest in human and robot vision.

Needless to say, in 1995, I finally finished my Ph.D. My dissertation, Visual Control of Robotic Tasks (1.4 MB), examines how vision systems can be integrated with robots. Put simply, it looks at the issues faced by anyone wanting to give a robot elementary eye-hand coordination skills.

In 1992, I designed a sensory chessboard for the Deep Thought group at the IBM T. J. Watson Research Center. Deep Thought is the reigning world electronic chess champion. In 1995, I developed a complete audio interface to the World Wide Web for the AT&T Applied Speech Research Lab in Murray Hill.


What are my research interests?


Where to next?

After graduating from Columbia, I will start as a post-doctoral research fellow at the National Institute of Standards and Technology in temperate Gaithersburg, MD.

Clipping normally found on the back of papers

Billibon H. Yoshimi is a graduate research assistant working under Peter Allen in the Department of Computer Science at Columbia University. He received his B.S., M.S., and Ph.D. degrees in computer science from Columbia University. While at Columbia, he received several awards, including the ARVO-National Eye Institute Travel Award, the NCR Stakeholders Award, the Columbia SEAS Bronze Key Award, and an Outstanding Student Award. His current research interests include visually guided robot servoing, real-time robotics, and real-time/parallel vision algorithms. Billibon is also a member of the IEEE.

Columbia University
Department of Computer Science
450 Computer Science Building
721 CESPR
New York, New York 10027
(office) 1 212 939 7117
(fax) 1 212 666 0140
yoshimi@cs.columbia.edu

Publications

Billibon H. Yoshimi. Visual Control of Robotic Tasks. Ph.D. dissertation, Columbia University, 1995.
Humans are adept at eye-hand control. Robots are not. Even after many hours of laborious programming, a robot's skill is only passable. The goal of this thesis is to explore how vision can be used to give robot systems the compliance and ability necessary to perform many eye-hand coordination tasks. The thesis describes the development of three robot systems which examine how vision can be used to control robotic tasks. The first system addresses the role of vision in tracking and grasping tasks. Since humans find little difficulty in visually tracking a target and then effecting a grasp, we determined that the tracking and grasping task was an appropriate vehicle for investigating the application of vision to robot control. The experimental system is composed of a real-time 3-D motion stereo tracker, a robot-gripper system capable of real-time tracking, and a set of grasping strategies. The second system investigates how visual input can be used to control an uncalibrated eye-hand system. This system performs an alignment task by exploiting a simple observable geometric effect, rotational invariance. Combined with image Jacobian-based control, this system demonstrates that uncalibrated eye-hand control can be used in situations where calibration is not available. The final system re-examines the grasping and manipulation task. This system exploits closed-loop visual feedback to control a robot and gripper as they perform a complex task requiring dexterous manipulation. The goal of this research program is to examine several fundamental questions surrounding visual control. We explore several different ways to perform visual control and determine where and when they can be used. We examine the suitability of vision for real-time control. We hope to show that visual control is a viable option for performing real-world tasks which require sensing in structured and unstructured environments.
Billibon H. Yoshimi and Peter K. Allen. Active, uncalibrated visual servoing. In 1994 IEEE International Conference on Robotics & Automation, volume 4, pages 156-161, San Diego, CA, May 1994.
We propose a method for visual control of a robotic system which does not require the formulation of an explicit calibration between image space and the world coordinate system. Calibration is known to be a difficult and error-prone process. By extracting control information directly from the image, we free our technique from the errors normally associated with a fixed calibration. We demonstrate this by performing a peg-in-hole alignment using an uncalibrated camera to control the positioning of the peg. The algorithm utilizes feedback from a simple geometric effect, rotational invariance, to control the positioning servo loop. The method uses an approximation to the image Jacobian to provide smooth, near-continuous control.
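
The abstract above mentions an approximate image Jacobian driving the positioning loop. As a minimal sketch of that general idea (not the controller from the paper), the Python/NumPy fragment below estimates the Jacobian online with a Broyden-style rank-one update and servos the features toward a goal through its pseudoinverse; the toy linear camera, the gain, and the initial guess are all illustrative assumptions:

    import numpy as np

    def broyden_update(J, dq, ds):
        # Rank-one (Broyden) refinement of the image-Jacobian estimate:
        # J maps a small robot motion dq to a feature motion ds.
        denom = dq @ dq
        if denom < 1e-12:          # no motion observed, nothing to learn
            return J
        return J + np.outer(ds - J @ dq, dq) / denom

    def servo_step(J, s, s_star, gain=0.5):
        # One image-based servo step: command a motion that drives the
        # feature vector s toward s_star via the Jacobian pseudoinverse.
        return -gain * np.linalg.pinv(J) @ (s - s_star)

    # Toy closed loop: the "camera" is an unknown linear map A.
    A = np.array([[2.0, 0.3], [-0.4, 1.5]])   # unknown to the controller
    q = np.zeros(2)                            # robot coordinates
    s_star = np.array([1.0, -1.0])             # desired image features
    J = np.eye(2)                              # deliberately crude initial guess

    s = A @ q
    for _ in range(50):
        dq = servo_step(J, s, s_star)
        s_new = A @ (q + dq)
        J = broyden_update(J, dq, s_new - s)   # learn from what actually moved
        q, s = q + dq, s_new

In a real system, dq would be sent to the robot and the feature motion measured by the tracker rather than simulated by A.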
Billibon H. Yoshimi and Peter K. Allen. Visual control of grasping and manipulation tasks. In 1994 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Las Vegas, NV, October 2-5, 1994.
This paper discusses the problem of visual control of grasping. We have implemented an object tracking system that can be used to provide visual feedback for locating the positions of fingers and objects to be manipulated, as well as the relationships between them. This visual analysis can be used to control open-loop grasping systems in a number of manipulation tasks where finger contact, object movement, and task completion need to be monitored and controlled.
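
As a toy illustration of the kind of monitoring the abstract describes (not the actual logic of the system), the sketch below derives finger-contact, grasp, and object-motion predicates from tracked 3-D positions; the distance thresholds, the two-finger grasp test, and the function name are hypothetical:

    import numpy as np

    CONTACT_DIST = 0.01   # meters; illustrative threshold, not the paper's
    MOVE_DIST = 0.005     # meters; illustrative threshold

    def grasp_events(finger_positions, obj_pos, obj_pos_prev):
        # Derive simple manipulation predicates from visually tracked
        # 3-D positions of the fingers and the manipulated object.
        fingers = np.asarray(finger_positions, dtype=float)
        obj_pos = np.asarray(obj_pos, dtype=float)
        contacts = np.linalg.norm(fingers - obj_pos, axis=1) < CONTACT_DIST
        return {
            "finger_contacts": contacts.tolist(),
            "grasped": int(contacts.sum()) >= 2,   # crude two-finger test
            "object_moved":
                np.linalg.norm(obj_pos - np.asarray(obj_pos_prev)) > MOVE_DIST,
        }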
P. Allen, A. Timcenko, B. Yoshimi, and P. Michelman. Automated tracking and grasping of a moving object with a robotic hand-eye system. IEEE Transactions on Robotics and Automation, 9(2):152-165, 1993.
Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for tracking and grasping a moving object. The focus of our work is to achieve a high level of interaction between a real-time vision system capable of tracking moving objects in 3-D and a robot arm equipped with a dexterous hand that can be used to pick up a moving object. We are interested in exploring the interplay of hand-eye coordination for dynamic grasping tasks such as grasping parts on a moving conveyor system, assembling articulated parts, or grasping from a mobile robotic system. Coordination between an organism's sensing modalities and motor control system is a hallmark of intelligent behavior, and we are pursuing the goal of building an integrated sensing and actuation system that can operate in dynamic as opposed to static environments. The system we have built addresses three distinct problems in robotic hand-eye coordination for grasping moving objects: fast computation of 3-D motion parameters from vision, predictive control of a moving robotic arm to track a moving object, and grasp planning. The system is able to operate at approximately human arm movement rates, and we present experimental results in which a moving model train is tracked, stably grasped, and picked up by the system. The algorithms we have developed that relate sensing to actuation are quite general and applicable to a variety of complex robotic tasks that require visual feedback for arm and hand control.
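
The predictive-control stage has to aim the arm where the object will be, not where it was last seen, to cover sensing and actuation latency. Here is a small constant-velocity (alpha-beta) predictor as a hedged stand-in for that stage; the gains, the 30 Hz frame time, and the class name are assumptions of the sketch, not the filter used in the paper:

    import numpy as np

    class AlphaBetaPredictor:
        # Constant-velocity target predictor: smooth noisy 3-D position
        # measurements and extrapolate ahead by the system latency.
        def __init__(self, alpha=0.85, beta=0.005, dt=1.0 / 30):
            self.alpha, self.beta, self.dt = alpha, beta, dt
            self.pos = None
            self.vel = np.zeros(3)

        def update(self, measured_pos):
            z = np.asarray(measured_pos, dtype=float)
            if self.pos is None:                 # first measurement
                self.pos = z
                return self.pos
            predicted = self.pos + self.vel * self.dt
            residual = z - predicted
            self.pos = predicted + self.alpha * residual
            self.vel = self.vel + (self.beta / self.dt) * residual
            return self.pos

        def predict(self, lead_time):
            # Where the target should be lead_time seconds from now.
            return self.pos + self.vel * lead_time

A caller would feed update() one stereo measurement per frame and command the arm toward predict(latency).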

Other cool sites and sometimes not-so-useless pieces of information


Friends (some old and some not so...)


Click here to go to the Robotics Lab page.

Click here to go to the Department of Computer Science page.