
125 milligram optical flow sensor "TinyTam"




As an exercise in size reduction, we have prototyped a complete optical flow sensor in a 125 milligram, 7mm x 7mm package. This mass includes optics, image sensing, and all processing. Above is a video and a close-up photograph. In the video, note the green vector indicating measured optical flow as a result of image motion.

Image sensor: Centeye Tamalpais 16x16 pixel image sensor (only an 8x8 block is being used), 1.3mm x 4.1mm, focal plane about 0.3mm x 0.3mm.

Optics: Proprietary printed pinhole, about 25 microns wide

Processor: Atmel ATtiny84

Optical flow algorithm: Modified "Image Interpolation" algorithm, originally developed by Prof. Mandyam Srinivasan (well known for his research on honey bee vision and navigation); a rough sketch of the idea appears after this list.

Frame rate: About 20Hz.
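
For the curious, below is a minimal sketch of a least-squares flow estimate in the spirit of the image interpolation method, run over an 8x8 block. This is an illustration, not Centeye's firmware: it uses floating point for clarity, while a real ATtiny84 build (no FPU, 512 bytes of RAM) would use fixed-point arithmetic.

```c
#include <stdint.h>

#define N 8

/* Estimate the 2D image shift (ofx, ofy), in pixels, between a reference
 * frame and the current frame. The +/-1 pixel central differences play
 * the role of the shifted reference images in the interpolation method. */
void iia_flow(const uint8_t ref[N][N], const uint8_t cur[N][N],
              float *ofx, float *ofy)
{
    float a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
    for (int y = 1; y < N - 1; y++) {
        for (int x = 1; x < N - 1; x++) {
            /* spatial gradients from +/-1 pixel shifts */
            float fx = 0.5f * ((float)ref[y][x + 1] - (float)ref[y][x - 1]);
            float fy = 0.5f * ((float)ref[y + 1][x] - (float)ref[y - 1][x]);
            /* temporal difference between frames */
            float ft = (float)cur[y][x] - (float)ref[y][x];
            a11 += fx * fx;  a12 += fx * fy;  a22 += fy * fy;
            b1  -= fx * ft;  b2  -= fy * ft;
        }
    }
    /* solve the 2x2 normal equations for the best-fit shift */
    float det = a11 * a22 - a12 * a12;
    if (det == 0.0f) { *ofx = 0; *ofy = 0; return; }  /* featureless image */
    *ofx = (a22 * b1 - a12 * b2) / det;
    *ofy = (a11 * b2 - a12 * b1) / det;
}
```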

This work is being performed as part of Centeye's participation in the Harvard University Robobees project, an NSF-funded project to build a robotic bee. The final target mass for the complete vision system (including processing) will be on the order of 10mg to 25mg, and the system will include omnidirectional sensing as well as algorithms to detect flowers. Obviously we still have some more work to do!





Comment by Hari on November 3, 2010 at 3:20am
Hi
It seems it can sense motion. Is it possible to sense 3D or 2D motion (like a PC mouse) with this sensor?
Are there any external components that should be used to sense relative motion?
Comment by Geoffrey L. Barrows on November 3, 2010 at 10:25am
This one is programmed to just sense 2D motion. The algorithm could be extended to detect things like divergence or curl, but I am not sure how much more that will tax the processor.
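For instance, here is a hedged sketch of that extension, assuming flow were first measured separately in four sub-windows (an arrangement I'm inventing here purely for illustration):

```c
/* Hypothetical sketch: divergence and curl from flow measured in four
 * sub-windows (left, right, top, bottom of the image), each giving a 2D
 * flow vector (u, v) in rad/s. Coordinates: x right, y down. "baseline"
 * is the angular separation between opposing sub-window centers, in rad. */
typedef struct { float u, v; } Flow;

/* div = du/dx + dv/dy: positive when flow expands outward (approach). */
float flow_divergence(Flow left, Flow right, Flow top, Flow bottom,
                      float baseline)
{
    return (right.u - left.u) / baseline + (bottom.v - top.v) / baseline;
}

/* curl = dv/dx - du/dy: nonzero when the scene rotates about the
 * sensor's viewing axis. */
float flow_curl(Flow left, Flow right, Flow top, Flow bottom,
                float baseline)
{
    return (right.v - left.v) / baseline - (bottom.u - top.u) / baseline;
}
```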
Comment by Hari on November 4, 2010 at 12:21am
I would like to clear up a few more doubts.
1. Is that 2D motion linear motion along two axes (x, y), or is it angular motion?
2. What would the position accuracy be? For example, 0.5mm, like other vision-based motion sensing systems.
3. What is the maximum operating frequency in frames per second (FPS)?
Thanks
Comment by Ravi Gaddipati on November 4, 2010 at 10:02pm
A thought occurred to me.
When tracking optic flow, how is the parallax effect taken care of? When the sensor is moving, the foreground will appear to move faster than the background. Won't this skew the results?
What would happen if a feature point (or cluster of pixels) was caught on an edge of two objects with different distances (though the cluster probably wouldn't match up again)?
Comment by Geoffrey L. Barrows on November 5, 2010 at 11:57am
@Harinath- 1) Both, actually. The algorithm senses motion across the image that rests on the pixels- that is, 2D planar motion. But the optics effectively project those pixels out into the world in an angular fashion. So if you rotate the sensor, you get O.F. based on the rate of rotation. If you translate the sensor (move it sideways w/o rotating), then you get O.F. based on the translational motion and the distance to the objects in view.

In practice, though, we can consider O.F. as an angular quantity, measured in degrees per second or (easier to use) radians per second.
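
As a toy illustration of those two contributions (my own numbers and function, not the sensor's code):

```c
#include <stdio.h>

/* Toy model of the two contributions described above: rotation adds its
 * full rate to the optical flow; translation adds speed divided by the
 * distance to the texture (for texture viewed roughly head-on). */
float optical_flow_rad_s(float rotation_rad_s, float speed_m_s,
                         float distance_m)
{
    return rotation_rad_s + speed_m_s / distance_m;
}

int main(void)
{
    /* e.g. no rotation, drifting sideways at 0.1 m/s, texture 0.5 m away */
    printf("OF = %.2f rad/s\n", optical_flow_rad_s(0.0f, 0.1f, 0.5f));
    return 0;
}
```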

2) The pixel pitch on the chip is 18 microns. The pinhole is maybe 150 microns above the chip surface, so you would get 18/150 = 0.12 radians per pixel if the pinhole were suspended in air. But the pinhole is actually printed on plastic, which in turn is placed onto the chip. Since the index of refraction of plastic is greater than that of air, the pixels are spread out further, probably to about 0.18 or so radians per pixel. This is 0.18 * 180 / pi = 10.3 degrees or so. However, the algorithms being used can sense sub-pixel motion, so we can probably sense motion down to a fifth of a pixel (a guesstimate that depends on all sorts of factors), or 0.036 radians, or 2.1 degrees or so.

To convert this into translational motion, multiply the angle in radians by the distance to the texture causing the optical flow. If the sensor is 50cm above the ground, then 0.036 radians is equivalent to 0.036 * 50cm = 1.8cm of translational motion. This is a rough estimate but hopefully you get the idea.

3) I think that one runs at about 20Hz maybe 25Hz. So the max optical flow would be 20 pixels/sec * 0.18 rad/pixel = 3.6 rad/sec. We haven't verified this yet rigorously- this is just an estimate.
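
For anyone who wants to play with these numbers, here is the back-of-envelope math above collected into a tiny program; all constants are the guesstimates stated in the text:

```c
#include <stdio.h>

int main(void)
{
    const float PI = 3.14159265f;

    float rad_per_px_air = 18.0f / 150.0f;    /* pixel pitch / pinhole height */
    float rad_per_px     = 0.18f;             /* spread out by the plastic */
    float smallest       = rad_per_px / 5.0f; /* ~1/5 pixel sub-pixel limit */

    printf("in air:     %.2f rad/pixel\n", rad_per_px_air);
    printf("in plastic: %.2f rad/pixel = %.1f deg\n",
           rad_per_px, rad_per_px * 180.0f / PI);
    printf("smallest motion: %.3f rad = %.1f deg\n",
           smallest, smallest * 180.0f / PI);

    /* translational equivalent at 50 cm above the ground */
    printf("at 50 cm: %.1f cm of travel\n", smallest * 50.0f);

    /* max measurable flow at ~20 Hz, assuming <= 1 pixel motion per frame */
    printf("max OF: %.1f rad/s\n", 20.0f * rad_per_px);
    return 0;
}
```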

@Ravi- Good questions. 1) The OF algorithm on this sensor assumes uniform optical flow, which is rather idealistic. So if you have close objects and far objects, the resulting measurement will be a blend, weighted toward whichever object has the greater contrast. There are other optical flow algorithms that can handle multiple levels of texture as well as "motion transparency"- for example, looking at the distant world through a chain link fence, with the fence causing more OF than the background. We just haven't put them in this sensor yet.

2) Cluster of pixels at the edge of two objects- you get occlusion effects, where the texture of the background object vanishes or appears as a result of the foreground object's motion. Classical optical flow algorithms cannot handle this situation natively. Personally I think this area is ripe for invention.
Comment by Phani Gaddipati on November 7, 2010 at 10:23pm
Can you upload the code?
Comment by Tom on November 8, 2010 at 6:28am
I'm interested in quadrotor indoor attitude stabilisation using visual information (possibly using an onboard processor), and am very inspired by the short article above. Therefore, two questions:

1. Is it possible to buy such a sensor together with optics (I've already read Centeye's product list and your very interesting article on lens choice)?
2. Why have you only used 8x8 out of the 16x16 pixels available: does it have to do with the optics or rather with processing power?
Comment by Geoffrey L. Barrows on November 8, 2010 at 8:08am
@Phani- Yes, I can upload the project files later today. It won't be commented well, however, since we only wrote it to get it working.

@Tom- Thank you for your feedback regarding the optics discussion. I have more to add- I think this is an area that is often neglected in image processing and robotics coursework.
1. We will be making a few more of the 125mg sensor, but these are already earmarked for our partners on the Robobees team. We could manufacture these in larger quantities if there is adequate interest, though I figure very few people would want a sensor at that scale, at least for the time being. For wider distribution, we are having other boards made that can be used to form a sub-gram sensor, and we are also designing a sensor based on the Arduino board that will probably weigh two grams or so. Would these be adequate for what you need?
2. 8x8 vs. 16x16- We chose 8x8 for now mostly due to processing speed. We haven't yet put forth any effort to optimize the optical flow code for speed or memory usage.
Comment by Tom on November 8, 2010 at 8:27am
A sensor weighing 2 grams goes perfectly well with quadrotors, so that's great to hear. Will there be a choice of lenses? I'd like to experiment with fisheye lenses as well.

When do you think they could be available and around which price (if you can give it away already)?
Comment by Geoffrey L. Barrows on November 8, 2010 at 8:45pm
Hi Tom,
Availability- Not sure yet. Will try to finalize the board design this week. The big question to answer is how developed everyone wants any algorithms to be before they feel comfortable using them.
Optics- The 2g mass assumes usage of printed optics- a topic I need to post on- and excludes any larger lens assemblies. The printed optics though can get a pretty wide FOV- in theory approaching 180 degrees but in practice a bit less.
Cost- TBD
Geof
