Embedded Eye

Give your gizmo the gift of sight

Geoffrey L. Barrows
  • Male
  • Washington, DC
  • United States

Geoffrey L. Barrows's Friends

  • Hugo Capucho
  • Scott Gardner
  • Sanjeev Koppal
  • Alicia Gibb
  • Qingwen Liu
  • Randy Mackay
  • Scott Schuff
  • Chris Shake
  • Craig Neely
  • Jason Mayes
  • Tom
  • Jared Napora
  • Mark Murphy
  • Eugenio Culurciello
  • John Karvounis

Geoffrey L. Barrows's Discussions

How to hook up a Stonyman Breakout directly to an Arduino (without the Rocket Shield)
1 Reply

If you don't have an ArduEye Rocket Shield, you can still hook up a Stonyman breakout board to an Arduino. All you need to do is make these connections: G connects to GROUND, V connects to 5V, RP…Continue

Started this discussion. Last reply by lucas johnson yesterday.

ArduEye Aphid Standalone Example #1: Generate PWM based on light position

Introduction: As an experiment, I've decided to make a few ArduEye Aphid examples written specifically for an Aphid, and written for "very easy" use. Specifically, these examples will meet the…Continue

Started Mar 22, 2013

ArduEye Aphid Initial Tutorial (Part 3): Exploring the ArduEye Examples

Introduction: On the ArduEye Wiki site there are three "Utility Sketches" that demonstrate different aspects of the ArduEye libraries. All three of…Continue

Started Mar 14, 2013

ArduEye Aphid Initial Tutorial (Part 2): Sample Firmware Shipped with Sensor

Note: It is recommended that you look through the "Getting Started Guide" on the ArduEye site (www.ardueye.com). In particular, look through the sections on…Continue

Started Mar 13, 2013

 

Geoffrey L. Barrows's Page

Latest Activity

lucas johnson replied to Geoffrey L. Barrows's discussion How to hook up a Stonyman Breakout directly to an Arduino (without the Rocket Shield)
"Hello, I've tried hooking up the Stonyman breakout directly to an Arduino Uno, and the image looks like those old TVs in which the picture keeps scrolling up and down. Any thoughts on why that's happening? Thanks"
yesterday
Jeff K commented on Geoffrey L. Barrows's blog post Why an optical mouse does not work when you hold it up
"Thanks for this explanation article. I have a question regarding this part: I may be failing horribly to follow the math (have a history of doing that) but is it possible to have an additional lens placed at about the desktop distance (r1?) under…"
Apr 5
Geoffrey L. Barrows commented on Peer Hansen's blog post laser distance module - 40meter (120')
"The ultrasound device method might work, but we've found that they tend to work best against a flat surface perpendicular to the sound direction of travel. The water well is a perfect example (I remember clapping my hand above wells in medieval…"
Dec 10, 2013
Geoffrey L. Barrows replied to Geoffrey L. Barrows's discussion Use of Stonyman chips with an Arduino Due (or other 3.3V ARM)
"Clinton- I *finally* purchased a Teensy 3.0 to play with. It looks pretty nice! -Geof"
Nov 22, 2013
Geoffrey L. Barrows replied to greg witteman's discussion Libraries and Due or "Teensy-3"/arm-cortex processors
"Greg- I just bought a Teensy 3.0 to play with. This thing looks nice! I agree the specs are nice (decent speed, 32 bits, that nice ADC, and a whopping 2.9 grams!). It shouldn't be too difficult to make a breakout board that connects one (or…"
Nov 22, 2013
Geoffrey L. Barrows posted a discussion

How to hook up a Stonyman Breakout directly to an Arduino (without the Rocket Shield)

If you don't have an ArduEye Rocket Shield, you can still hook up a Stonyman breakout board to an Arduino. All you need to do is make these connections:
  • G connects to GROUND
  • V connects to 5V
  • RP connects to Arduino pin D8
  • IP connects to Arduino pin D7
  • RV connects to Arduino pin D6
  • IV connects to Arduino pin D5
  • PH connects to Arduino pin D4
  • AN is the analog output from the Stonyman chip and connects to Arduino pin A0
  • OE may be left unconnected.
The ArduEye Rocket Shield basically makes these…See More
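With the lines wired as above, the Stonyman is driven entirely by pulsing its digital inputs: RP and RV reset the register pointer and value, IP and IV increment them, and PH pulses the amplifier before a read on AN (see the Stonyman data sheet for the register map). The snippet below is a minimal illustration of that pulse sequence, not the ArduEye library itself; the Arduino calls are stubbed so it compiles off-board, and pin numbers come from the list above.

```cpp
#include <cassert>

// Stand-ins for the Arduino core so this compiles off-board; on real
// hardware, delete these and use the built-in digitalWrite().
const int HIGH = 1, LOW = 0;
int lineLevel[16];   // last level written to each pin
int pulses[16];      // rising edges seen on each pin
void digitalWrite(int pin, int level) {
    if (level == HIGH && lineLevel[pin] == LOW) pulses[pin]++;
    lineLevel[pin] = level;
}

// Pin assignments from the wiring list above.
const int RP = 8;   // reset the pointer register
const int IP = 7;   // increment the pointer register
const int RV = 6;   // reset the value register
const int IV = 5;   // increment the value register

// One short pulse on a control line.
void pulse(int pin) {
    digitalWrite(pin, HIGH);
    digitalWrite(pin, LOW);
}

// Select register n: reset the pointer, then increment n times.
void setPointer(int n) {
    pulse(RP);
    for (int i = 0; i < n; ++i) pulse(IP);
}

// Load value v into the selected register the same way.
void setValue(int v) {
    pulse(RV);
    for (int i = 0; i < v; ++i) pulse(IV);
}
```

On the real chip you would then pulse PH and read the pixel with analogRead(A0); which register number selects what is defined in the data sheet, not here.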
Aug 5, 2013
Ron Patey left a comment for Geoffrey L. Barrows
"Hello Geoffrey. Yes, the reason for the resistor was to equalize the gain, not necessarily the offset. In this way, all the control lines to each eye can be paralleled. I was expecting to have to subtract from a white level reading kept in my…"
May 31, 2013
Geoffrey L. Barrows commented on Ron Patey's blog post EYE results, 32 levels reduced to 16 levels for the LCD screen.
"It is interesting you used two different resistors to equalize the values. That would work, but as the resistance gets lower so could the gain. Have you tried adding a constant offset to the values from one chip?"
May 30, 2013
Geoffrey L. Barrows replied to Tomas Garcia's discussion Replacing onboard receiver on a micro air vehicle
"It is pretty easy to hover with one camera and one range sensor (say sonar). We've been able to make a quad hover, barely, with just the camera, but we get better results with two or more. For the demo using that coaxial 'mCX, the 18 cm…"
May 30, 2013
Geoffrey L. Barrows replied to Jianguo Zhao's discussion About wire bonding the Stonyman Vision Chip to our PCB
"I've never seen the current be more than several milliamps but that really depends on the bias settings."
May 30, 2013
Geoffrey L. Barrows replied to Mangibu's discussion Binning
"Good catch! Yes, at 4x4 blocks you have 28x28 super pixels, not 23x23. That is a typo to be fixed... You could use 0xEE instead of 0x77 for downsampling by 4x4, however the super pixels would be shifted by a pixel and thus the boundaries between…"
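For reference, the arithmetic behind that correction is just integer division of the raw array side by the block size (the 112x112 raw resolution is the Stonyman's, per its data sheet), and 0xEE is exactly 0x77 shifted left one bit, which is why it yields the same 4x4 blocks with boundaries moved over by one pixel. A trivial sketch:

```cpp
#include <cassert>

// Super-pixel grid side for an NxN raw array binned into bxb blocks,
// e.g. 112 / 4 = 28 super pixels per side.
int superPixelsPerSide(int rawSide, int binFactor) {
    return rawSide / binFactor;
}
```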
May 30, 2013
Geoffrey L. Barrows replied to Joe Watson's discussion FPN Patterns for different resolutions
"Hi Joe, Good question! This is one we've asked internally but haven't really tried yet. In theory the answer is "yes", but I don't know in practice. Currently we just use a different FPN mask for each binning amount. (It…"
May 30, 2013
Geoffrey L. Barrows replied to Tomas Garcia's discussion Replacing onboard receiver on a micro air vehicle
"Good question. I don't want to be discouraging, but it took quite a bit of effort to get the "vision based hover in place" demo working, especially at that size. (That was a $1.4M DARPA-funded project with many twists and turns before…"
May 30, 2013
Geoffrey L. Barrows replied to Jianguo Zhao's discussion About wire bonding the Stonyman Vision Chip to our PCB
"We can do custom wirebonding jobs. It is not that expensive but the cost can vary depending on the circuit board and tolerances etc. As for the footprint for the Stonyman- the layouts from either the Aphid or the Stonyman Breakout Board are fine.…"
May 30, 2013
Bobb Craig commented on Geoffrey L. Barrows's blog post Internet of Things (IoT) car traffic counter camera using an ArduEye and COSM
"Geoffrey, First of all, thanks for a very interesting site and product. Second, how soon do you think the Aphids will be on the street? And third, after reading your comment about hooking the project up to a low power micro controller, I…"
May 7, 2013
Dan Loesch replied to Geoffrey L. Barrows's discussion Use of Stonyman chips with an Arduino Due (or other 3.3V ARM)
"I have finals this coming week and after that I am leaning into this full bore. You are going to have to adjust the port mapping to that of the Arduino Due. I'm attaching links to explain this and give you port tables. I hope…"
Apr 20, 2013

Profile Information

Hometown (All questions are optional)
Washington, DC
About me
Founder of Centeye, Inc.
Technical interest
optics, image sensor chips, vision hardware, image processing, ground robotics, airborne robotics, sensor networks, art or music
Website
http://www.centeye.com

Geoffrey L. Barrows's Blog

Updated Stonyman data sheet

We have updated the Stonyman data sheet to include software examples (in pseudocode). Version 1.0 is here: Stonyman_Hawksbill_ChipInstructions_Rev10_20130312.pdf

Posted on March 12, 2013 at 11:57am

ArduEye Aphids first batch now available

We have a small batch of ArduEye Aphids that are now available for serious beta users. An Aphid is essentially a clone of an Arduino Pro Mini but with a Stonyman vision chip and an external 12-bit 100ksps ADC added. For more details and for ordering, please visit the…

Continue

Posted on March 6, 2013 at 9:30am

Internet of Things (IoT) car traffic counter camera using an ArduEye and COSM

Eager to make a little foray into the "Internet of Things", I decided to experiment with the use of an ArduEye as an "Eye for the IoT". My house is on a fairly busy street, and I have a good vantage point of it from my attic home office. A car traffic counter seemed like a good choice…

Continue

Posted on February 19, 2013 at 5:00pm — 3 Comments

BCIT students make eye tracking computer input device, for ALS patients, using Centeye chip

This is the type of news that makes everything we do worthwhile! This past January, I was contacted by Alex Sayer, Alan Kwok, and Benny Chick, students at the British Columbia Institute of Technology (BCIT) in Vancouver, who wanted to use some of our Tam2 chips for a class project in which they would provide a human-computer interface (HCI) for people suffering from ALS (Lou Gehrig's disease) who cannot operate a computer with their hands. Their idea: Use a low resolution image sensor,…

Continue

Posted on August 13, 2012 at 5:56pm

Comment Wall (4 comments)

At 6:59am on February 28, 2011, Hari said…

Dear sir,

I'm a doctoral student interested in working with these chips for 3-dimensional position (linear motion) tracking with good precision (<3 mm), good accuracy, and a fast update rate (>50 Hz). If it is possible for you to design the chips to meet the above requirements, we will place an order as soon as possible. I would like to find an alternative to the Wii remote.

Thanks

Harinath

At 8:44am on February 17, 2012, Nicola Massari said…

Hi Geoffrey,

 

I am really sorry about my late answer! Actually, your question is complex and needs more time to answer. I personally think that, in general, vision sensors are complex detectors and usually require strong knowledge, and in particular strong experience, in visual processing for specific applications. I am not particularly expert in processing, and I often trust my intuition to design my sensors; sometimes I realize that this is not the correct way to proceed. On the other hand, the potential of vision sensors is really high. Just think about what the human eye can do and how many tasks we can perform only thanks to our eyes!

 

Thank you for your interest!

 

Regards

 

Nicola

 

At 10:42pm on March 22, 2012, Paul Atkinson said…

 Hi Geof,

Thanks for the comment recently. I did receive the Tam2 chip yesterday. I have seen some info about how to use the 5 pins on the rox1 board: Vdd (onboard), Gnd (onboard), ANA (onboard), and IO6 & IO7, which represent CLKIN and CLB (RESET) respectively, if that's the correct order (or vice versa for the two I/O pins). I'd just like to know if those are the 5 pins used for the Tam2 chip. Also, are the ANAOUTBIAS and COLOOUTBIAS pins available on the rox1 board, with a 1-10k resistor tied from them to Gnd for biasing? Thanks again, with lots of appreciation.

PS. Are these lenses the pinhole type? If possible, do you know the focal length?

At 7:19pm on May 31, 2013, Ron Patey said…

Hello Geoffrey. Yes, the reason for the resistor was to equalize the gain, not necessarily the offset. In this way, all the control lines to each eye can be paralleled. I was expecting to have to subtract from a white level reading kept in my permanent MRAM. This handles the offset.

I didn't expect to have to change the gain in the scanning software, but doing that handles 'noise' beautifully, especially vertical ghosts... They disappear! I keep a 'black' level in permanent MRAM for each pixel as well as the white level reference. The stored 'black' level is calculated as a fraction to speed things up. The black reference scan uses a darkish target under the same light level as the white reference scan. (Too dark and you make the range too large.)

The fraction is calculated to make the black reference produce a black level 32 pixel counts below the white level. The 8-bit multiplication of this 8-bit fraction with the sampled pixel creates a 16-bit PRODH/PRODL result in a single CPU cycle, and the PRODH byte is the gain-corrected result of a normal scan. Simple and fast.
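For readers without a PIC in front of them: the PRODH/PRODL step above is an ordinary 8x8-bit fixed-point multiply, where keeping the high byte is equivalent to (pixel x fraction) / 256. A sketch of one way to realize it (the 32-count black target is from the description above; the function names and clamping are my own, not the original firmware):

```cpp
#include <cstdint>

// Gain-correct one sample: multiply the raw 8-bit pixel by its stored
// 8-bit fraction; the high byte of the 16-bit product (the PRODH byte
// on the PIC) is the corrected value.
uint8_t gainCorrect(uint8_t raw, uint8_t fraction) {
    uint16_t product = static_cast<uint16_t>(raw) * fraction;
    return static_cast<uint8_t>(product >> 8);
}

// Derive the per-pixel fraction from the white and black reference
// scans so the corrected black level lands 32 counts below the
// corrected white level, as described above.
uint8_t calibrateFraction(uint8_t white, uint8_t black) {
    uint16_t span = static_cast<uint16_t>(white - black);  // assumes white > black
    uint16_t f = (32u * 256u) / span;                      // span maps to 32 counts
    return static_cast<uint8_t>(f > 255 ? 255 : f);        // clamp to 8 bits
}
```

For example, a white reference of 200 and a black reference of 136 give a fraction of 128, and the corrected white and black then differ by exactly 32 counts.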

After that, each picture is brightness-adjusted to handle differing lighting conditions. It reduces the 32-level pixel readings to 16 levels (4 bits), which matches my LCD resolution. I can turn my very bright lighting LEDs off or leave them on in my living room with only a small difference on the LCD screen.

Amazingly, the fluorescent and LED AC light patterns are not really noticeable... I see a moire fringe every now and then. I think your photosensors do integration just with high-impedance RC filtering. I will have to check that sometime.

But I still use the 350 mA LED strip behind the eyes because it makes for great reference scans.

So, I have adjusted for offset, gain, and brightness... the resistor allows the signals to be in the right range, which matters only because I am paralleling the control signals to each eye. I am extremely happy with the results... I wish I had a faster CPU, but I have simulated finding the eyes, nose, and mouth on my laptop using vertical darkness gradients, and this will mean scanning only 1/5 or less of the pixels at high speed, once their location has been identified.

The gradients take the pixels 2 above and 2 below the pixel in question, and if the real pixel is darker, I multiply the differences by a constant number. The value of the pixel itself is not used, just the difference between it and the others. If the pixel is good, it is kept; if not, it is set to full white. I then take the average of the non-white pixel values and show as full black only the ones below (darker than) the average. The result is really fun. The constant number can change, the actual value of the pixel (or a portion of it) can be included, and the results are still interesting. Gradient therapy. I will also try varying the 'constant' value in a feed-forward format to keep the displayed pixels just enough to be used for decisions.

I might also change the range of pixels surrounding the pixel in question. I tried this on my laptop and it didn't matter much. I'm looking for edges of horizontal things and the vertical gradient method does this well.
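The two-stage procedure described above can be sketched roughly as follows. This is only one reading of the description, not the original firmware: the function names, the white sentinel value of 255, and the direction of the final "below the average" comparison are all my assumptions.

```cpp
#include <vector>
#include <cstddef>

// Stage 1: vertical darkness gradient over one image column. A pixel
// darker than the pixels two rows above and two rows below is scored
// by the sum of those differences times a constant k; all other
// pixels are set to full white (255).
std::vector<int> darknessGradient(const std::vector<int>& col, int k) {
    std::vector<int> out(col.size(), 255);
    for (std::size_t y = 2; y + 2 < col.size(); ++y) {
        int above = col[y - 2], below = col[y + 2], c = col[y];
        if (c < above && c < below) {
            int score = k * ((above - c) + (below - c));
            out[y] = score >= 255 ? 254 : score;  // stay below the white sentinel
        }
    }
    return out;
}

// Stage 2: average the non-white scores and show as full black (0)
// only the entries below that average; everything else is white.
std::vector<int> thresholdAgainstAverage(const std::vector<int>& grad) {
    long sum = 0;
    int n = 0;
    for (int v : grad)
        if (v != 255) { sum += v; ++n; }
    std::vector<int> out(grad.size(), 255);
    if (n == 0) return out;
    int avg = static_cast<int>(sum / n);
    for (std::size_t i = 0; i < grad.size(); ++i)
        if (grad[i] != 255 && grad[i] < avg) out[i] = 0;
    return out;
}
```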

I also tried shifting the whole image left and right to create the minimum difference between the two eye images. WOW, fun. It works just as well for the gradient screen as for the original full screen. Processing this is slow... so gradients would be better.
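That shift-and-compare step is essentially a one-dimensional disparity search: slide one image across the other and keep the shift with the smallest difference. A minimal sketch over a single row, using a sum-of-absolute-differences cost (the function name and the overlap normalization are my own; the original presumably shifts the full frame):

```cpp
#include <vector>
#include <cstdlib>
#include <cstddef>

// Slide the right image across the left one and return the shift that
// minimizes the mean absolute difference over the overlapping region.
int bestShift(const std::vector<int>& left, const std::vector<int>& right,
              int maxShift) {
    int best = 0;
    long bestCost = -1;
    for (int s = -maxShift; s <= maxShift; ++s) {
        long cost = 0;
        int n = 0;
        for (std::size_t i = 0; i < left.size(); ++i) {
            long j = static_cast<long>(i) + s;
            if (j < 0 || j >= static_cast<long>(right.size())) continue;
            cost += std::labs(static_cast<long>(left[i]) - right[j]);
            ++n;
        }
        if (n == 0) continue;
        cost = cost * 100 / n;  // normalize so small overlaps don't win unfairly
        if (bestCost < 0 || cost < bestCost) { bestCost = cost; best = s; }
    }
    return best;
}
```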


© 2014 Created by Geoffrey L. Barrows.