Keith J Hanna, Age 58
Brooklyn, NY

Keith Hanna Phones & Addresses

Brooklyn, NY

282 Cabrini Blvd APT 1G, New York, NY 10040

1965 Broadway, New York, NY 10023 (646) 852-6614

Bronxville, NY

Ewing, NJ

30 Scott Ave, Princeton Junction, NJ 08550 (609) 936-9381

West Windsor, NJ

Princeton, NJ

64 Kensington Rd APT 3B, Bronxville, NY 10708

Education

Degree: Associate degree or higher

Mentions for Keith J Hanna

Keith Hanna resumes & CV records

Resumes

Founder, Chief Executive Officer

Location:
New York, NY
Industry:
Information Technology And Services
Work:
IPRD Group
Founder, Chief Executive Officer
EyeLock Jun 2014 - Jul 2015
Chief Innovation Officer
EyeLock Apr 2005 - Jun 2015
Chief Technology Officer
SRI International May 2003 - Apr 2005
Group Head, Vision Systems
SRI International Apr 1996 - May 2003
SMTS, Vision Systems
SRI International Jan 1990 - Apr 1996
MTS, Vision Systems
Education:
University of Oxford 1986 - 1990
Doctorates, Doctor of Philosophy, Robotics, Philosophy
University of Oxford 1983 - 1986
Bachelors, Bachelor of Arts, Engineering, Electronics Engineering
Skills:
Start Ups, Software Development, Business Strategy, Product Management, Program Management, Integration, Project Management, Management, Strategic Planning, Business Development

Keith Hanna

Work:
Personal Taxes 2005 - 2012
Correction Officer
Skills:
Editing, Event Planning, Public Relations, Nonprofits, Community Outreach, Fundraising, Data Entry, Teamwork, Facebook, Event Management, Creative Writing, Coaching, Spanish, Administration, Social Networking, Analysis, Time Management, Policy, Data Analysis, Team Leadership, Marketing, Organizational Development, Program Management, Leadership Development, Access, Volunteer Management, Higher Education, Government, Project Planning, Supervisory Skills, Military Operations, Security Clearance, Military, Military Experience, Operational Planning, Leadership, Strategic Planning, Team Building, Public Policy, Training, Social Media

Regional Sales Manager, BTI Systems, Inc.

Position:
Regional Sales Manager, North Central at BTI Systems
Location:
Cedar Rapids, Iowa
Industry:
Telecommunications
Work:
BTI Systems - Cedar Rapids, IA since Nov 2010
Regional Sales Manager, North Central
TeamQuest Corporation Sep 2006 - Nov 2010
Senior National Account Manager, Telecommunications
LignUp Corporation May 2005 - Jul 2006
Regional Sales Manager
Engage Communication, Inc. May 2004 - May 2005
Regional Sales Manager
Allied Telesyn, Inc. Jun 2002 - May 2004
Sr. Account Director
Metro Xmit, LLC May 2001 - May 2002
Director of Business Development
Cisco Systems, Inc. Nov 1999 - Apr 2001
Major Account Manager
Lucent Technologies Sep 1998 - Nov 1999
Account Executive
DSC Communications Mar 1996 - Sep 1999
Account Director
Teradyne Telecom Division Jun 1989 - Mar 1996
Sales Application Engineer
Education:
University of Iowa 1984 - 1989
BS, Mechanical Engineering
Skills:
VoIP, Virtualization, Sales Management
Interests:
Golf, Sports, Photography, Art, Travel, Civil War History. My teams... Hawkeyes, Cubs, Bears & Bulls
Honor & Awards:
TeamQuest Quota Club 2009 - Highest Revenue and % vs. Quota Achievement
ITIL Certified 2007 - Foundations of IT Service Management
Karrass Effective Negotiating
Franklin Covey 7 Habits of Highly Effective People
Allied Telesyn Pillar Award, 3rd Quarter 2002, for Sales Process Excellence
Cisco Systems Fast Start Account Manager of the Year, FY 2000
GTE 1993 Vendor of the Year

Publications & IP owners

Us Patents

Method And Apparatus For Processing Images To Compute Image Flow Information

US Patent:
6430304, Aug 6, 2002
Filed:
Apr 18, 2001
Appl. No.:
09/837407
Inventors:
Keith James Hanna - Princeton NJ
Rakesh Kumar - Monmouth Junction NJ
James Russell Bergen - Hopewell NJ
Harpreet Singh Sawhney - W. Windsor NJ
Jeffrey Lubin - New York NY
Assignee:
Sarnoff Corporation - Princeton NJ
International Classification:
G06K 9/36
US Classification:
382/107, 382/284
Abstract:
A method and apparatus for accurately computing parallax information as captured by imagery of a scene. The method computes the parallax information of each point in an image by computing the parallax within windows that are offset with respect to the point for which the parallax is being computed. Additionally, parallax computations are performed over multiple frames of imagery to ensure accuracy of the parallax computation and to facilitate correction of occluded imagery.
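
The offset-window idea in this abstract can be illustrated with a toy sketch. Below, a hypothetical `disparity_at` function (all names here are illustrative, not from the patent) estimates the shift at one pixel of a 1-D scanline pair by scoring several SSD matching windows whose positions are offset with respect to that pixel and keeping the best. This is only a minimal illustration of the windowing idea; the actual method operates on 2-D imagery over multiple frames.

```python
# Toy sketch of offset-window matching on 1-D scanlines (illustrative only;
# the patented method works on 2-D images over multiple frames).

def ssd(a, b):
    """Sum of squared differences between two equal-length windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def disparity_at(left, right, x, win=3, max_d=4):
    """Estimate the disparity at pixel x of `left` by testing matching
    windows that are *offset* with respect to x (left-aligned, centered,
    right-aligned) and keeping the best-scoring candidate shift."""
    best = (float("inf"), 0)
    for off in (-(win - 1), -(win // 2), 0):   # window start offsets around x
        lo = x + off
        if lo < 0 or lo + win > len(left):
            continue
        patch = left[lo:lo + win]
        for d in range(max_d + 1):             # candidate disparities
            if lo + d + win > len(right):
                continue
            cost = ssd(patch, right[lo + d:lo + d + win])
            if cost < best[0]:
                best = (cost, d)
    return best[1]

# Toy scanlines: `right` is `left` shifted by 2 pixels.
left  = [0, 0, 9, 7, 5, 3, 1, 0, 0, 0]
right = [0, 0, 0, 0, 9, 7, 5, 3, 1, 0]
```

With these scanlines, `disparity_at(left, right, 3)` recovers the 2-pixel shift; near a boundary, one of the offset windows can avoid straddling an occlusion, which is the motivation the abstract gives for offsetting the windows.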

Apparatus For Enhancing Images Using Flow Estimation

US Patent:
6490364, Dec 3, 2002
Filed:
Jun 25, 2001
Appl. No.:
09/888693
Inventors:
Keith James Hanna - Princeton NJ
Rakesh Kumar - Monmouth Junction NJ
James Russell Bergen - Hopewell NJ
Harpreet Singh Sawhney - W. Windsor NJ
Jeffrey Lubin - New York NY
Assignee:
Sarnoff Corporation - Princeton NJ
International Classification:
G06K 9/36
US Classification:
382/107, 382/299
Abstract:
A method and apparatus for accurately computing parallax information as captured by imagery of a scene. The method computes the parallax information of each point in an image by computing the parallax within windows that are offset with respect to the point for which the parallax is being computed. Additionally, parallax computations are performed over multiple frames of imagery to ensure accuracy of the parallax computation and to facilitate correction of occluded imagery.

Method And System For Rendering And Combining Images To Form A Synthesized View Of A Scene Containing Image Information From A Second Image

US Patent:
6522787, Feb 18, 2003
Filed:
Aug 25, 1997
Appl. No.:
08/917402
Inventors:
Rakesh Kumar - Dayton NJ
Keith James Hanna - Princeton NJ
James R. Bergen - Hopewell NJ
Padmanabhan Anandan - Lawrenceville NJ
Kevin Williams - Yardley PA
Mike Tinker - Yardley PA
Assignee:
Sarnoff Corporation - Princeton NJ
International Classification:
G06K 9/40
US Classification:
382/268, 382/284
Abstract:
An image processing system for imaging a scene to mosaic, selecting a new viewpoint of the scene, and rendering a synthetic image from the mosaic of the scene from that new viewpoint. The synthesized image is then combined with a second image. The combination of the second image and the synthetic image generates a composite image containing a realistic combination of objects in the second image and the scene. Using this system, a production set or other scene need only be created once, then imaged by the system. Thereafter, through image processing, any view of the scene can be synthesized and combined with separately imaged performers or other objects to generate the composite image. As such, a production set or other scene can be repetitively reused without recreating the physical scene.

Method And Apparatus For Multi-View Three Dimensional Estimation

US Patent:
6571024, May 27, 2003
Filed:
Jun 18, 1999
Appl. No.:
09/336319
Inventors:
Harpreet Singh Sawhney - Cranbury NJ
Rakesh Kumar - Monmouth Junction NJ
Yanlin Guo - Plainsboro NJ
Jane Asmuth - Princeton NJ
Keith James Hanna - Princeton NJ
Assignee:
Sarnoff Corporation - Princeton NJ
International Classification:
G06K 9/32
US Classification:
382/294, 382/284, 382/154, 348/42
Abstract:
An apparatus and method for generating automated multi-view three dimensional pose and geometry estimation for the insertion of realistic and authentic views of synthetic objects into a real scene. The multi-view three dimensional estimation routine comprises the steps of feature tracking, pairwise camera pose estimation, computing camera pose for overlapping sequences, and performing a global block adjustment to provide camera pose and scene geometry information for each frame of a scene. A match move routine may be used to insert a synthetic object into one frame of a video sequence based on the pose and geometric information of the frame, and to calculate all other required views of the synthetic object for the remaining frames using the pose and geometric information acquired as a result of the multi-view three dimensional estimation routine.

Method And Apparatus For Performing Geo-Spatial Registration Of Imagery

US Patent:
6597818, Jul 22, 2003
Filed:
Mar 9, 2001
Appl. No.:
09/803700
Inventors:
Rakesh Kumar - Monmouth Junction NJ
Stephen Charles Hsu - Cranbury NJ
Keith Hanna - Princeton NJ
Supun Samarasekera - Princeton NJ
Richard Patrick Wildes - Princeton NJ
David James Hirvonen - Princeton NJ
Thomas Edward Klinedinst - Doylestown PA
William Brian Lehman - Mount Holly NJ
Bogdan Matei - Piscataway NJ
Wenyi Zhao - Plainsboro NJ
Assignee:
Sarnoff Corporation - Princeton NJ
International Classification:
G06K 9/32
US Classification:
382/294, 382/284, 382/305, 707/102
Abstract:
A system and method for accurately mapping between image coordinates and geo-coordinates, called geo-spatial registration. The system utilizes the imagery and terrain information contained in the geo-spatial database to precisely align geodetically calibrated reference imagery with an input image, e.g., dynamically generated video images, and thus achieve a high-accuracy identification of locations within the scene. When a sensor, such as a video camera, images a scene contained in the geo-spatial database, the system recalls a reference image pertaining to the imaged scene. This reference image is aligned very accurately with the sensor's images using a parametric transformation. Thereafter, other information associated with the reference image can easily be overlaid upon or otherwise associated with the sensor imagery.
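
The alignment step this abstract describes can be sketched under strong simplifying assumptions: a pure-translation parametric transform recovered by brute-force SSD search, followed by an illustrative image-to-geo mapping. The `geo_origin` and `pixel_size` values below are made up for the example and are not from the patent, which covers far more general parametric alignment.

```python
# Toy sketch of geo-registration: translation-only alignment by exhaustive
# SSD search, then pixel-to-geo mapping via the reference calibration.
# Illustrative names and values only, not the patented method.

def align_translation(ref, img, max_shift=3):
    """Find the (dy, dx) shift that best places `img` onto `ref` (2-D lists)
    by minimizing sum of squared differences over the overlap."""
    best = (float("inf"), (0, 0))
    h, w = len(img), len(img[0])
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost, n = 0, 0
            for y in range(h):
                for x in range(w):
                    ry, rx = y + dy, x + dx
                    if 0 <= ry < len(ref) and 0 <= rx < len(ref[0]):
                        cost += (img[y][x] - ref[ry][rx]) ** 2
                        n += 1
            if n and cost < best[0]:
                best = (cost, (dy, dx))
    return best[1]

def pixel_to_geo(y, x, shift, geo_origin=(40.35, -74.65), pixel_size=1e-4):
    """Map a sensor-image pixel to (lat, lon): apply the recovered shift to
    reach reference coordinates, then use the reference's calibration.
    geo_origin/pixel_size are hypothetical calibration values."""
    dy, dx = shift
    lat0, lon0 = geo_origin
    return (lat0 - (y + dy) * pixel_size, lon0 + (x + dx) * pixel_size)

# Reference "imagery" with unique values so the alignment is unambiguous.
ref = [[10 * r + c for c in range(5)] for r in range(5)]
# Sensor image: a 2x2 view cut from the reference at row 1, column 2.
img = [row[2:4] for row in ref[1:3]]
```

`align_translation(ref, img)` recovers the (1, 2) offset, and `pixel_to_geo` then turns any sensor pixel into geo-coordinates, which is the image-to-geo mapping the abstract calls geo-spatial registration.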

Method And Apparatus For Estimating Feature Values In A Region Of A Sequence Of Images

US Patent:
6681058, Jan 20, 2004
Filed:
Mar 6, 2000
Appl. No.:
09/518872
Inventors:
Keith Hanna - Princeton NJ
Rakesh Kumar - Monmouth Junction NJ
Assignee:
Sarnoff Corporation - Princeton NJ
International Classification:
G06K 9/32
US Classification:
382/294, 382/170
Abstract:
A method and apparatus are disclosed that estimate the brightness or other feature values of unchanging or slowly changing regions of an image in a sequence of video images, even when the region is obscured by objects over large portions of the video sequence. The apparatus and method generate a histogram for each image region position over a plurality of image frames in the sequence. The mode, or most frequently occurring value, of the image region as indicated by the histogram is selected as representing the unchanging portion of the image. The mode values of all of the regions are then assembled to form a composite image of the unchanging or slowly changing feature values. According to one method, the histogram is generated using a recursive filter. In order to process images that exhibit some motion from frame to frame, the images in the video sequence may be aligned before generating the histogram. If the camera produces artifacts such as variations in the image caused by an automatic gain control (AGC) function, each image in the sequence of video images may be filtered either temporally or spatially before performing the histogramming operation to remove these artifacts.
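
The core of this abstract, per-pixel temporal histogramming with mode selection, can be sketched in a few lines. This is a toy illustration assuming pre-aligned grayscale frames; the patent also covers recursive-filter histograms and artifact pre-filtering, which are omitted here.

```python
# Toy sketch of mode-based background estimation: histogram each pixel's
# values over the sequence and keep the most frequent (mode) value.
from collections import Counter

def background_mosaic(frames):
    """frames: list of equal-sized 2-D lists (rows of pixel values).
    Returns the per-pixel temporal mode - the 'unchanging' scene value."""
    rows, cols = len(frames[0]), len(frames[0][0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            hist = Counter(f[r][c] for f in frames)   # per-pixel histogram
            out[r][c] = hist.most_common(1)[0][0]     # mode = background
    return out

# Background value 10 everywhere; a moving "object" (value 99) covers each
# pixel in only one of the four frames, so the mode recovers the background.
frames = [
    [[99, 10], [10, 10]],
    [[10, 99], [10, 10]],
    [[10, 10], [99, 10]],
    [[10, 10], [10, 99]],
]
```

Even though every pixel is occluded in some frame, `background_mosaic(frames)` returns the all-10 background, which is exactly the occlusion-robustness claim in the abstract.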

Fully Automated Iris Recognition System Utilizing Wide And Narrow Fields Of View

US Patent:
6714665, Mar 30, 2004
Filed:
Dec 3, 1996
Appl. No.:
08/759346
Inventors:
Keith James Hanna - Princeton NJ
Peter J. Burt - Princeton NJ
Shmuel Peleg - Princeton NJ
Douglas F. Dixon - Hopewell NJ
Deepam Mishra - Plainsboro NJ
Lambert E. Wixson - Rocky Hill NJ
Robert Mandlebaum - Philadelphia PA
Peter Coyle - Newtown PA
Joshua R. Herman - Robbinsville NJ
Assignee:
Sarnoff Corporation - Princeton NJ
International Classification:
G06K 9/00
US Classification:
382/117, 382/106, 382/190, 382/209
Abstract:
A recognition system is disclosed that obtains and analyzes images of at least one object in a scene. It comprises a wide field of view (WFOV) imager, which captures an image of the scene and locates the object, and a narrow field of view (NFOV) imager, which responds to the location information provided by the WFOV imager and captures an image of the object at a higher resolution than the WFOV image. In one embodiment, the system obtains and analyzes images of the irises of the eyes of a human or animal with little or no active involvement by the subject. Also disclosed is a method for obtaining and analyzing images of at least one object in a scene: capturing a wide field of view image to locate the object in the scene, then using a narrow field of view imager, responsive to the location information from the capturing step, to obtain a higher-resolution image of the object.
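
The wide-to-narrow hand-off the abstract describes can be mimicked with a toy sketch, assuming brightest-pixel "detection" in a 2x-downsampled WFOV image and a simple full-resolution crop as the NFOV capture. All function names here are illustrative stand-ins for the real imagers and recognition stages.

```python
# Toy sketch of the WFOV -> NFOV hand-off: coarse localization steers a
# high-resolution capture. Illustrative only, not the patented system.

def downsample2(img):
    """Simulate the WFOV imager: halve resolution by 2x2 max-pooling."""
    return [[max(img[2*r][2*c], img[2*r][2*c+1],
                 img[2*r+1][2*c], img[2*r+1][2*c+1])
             for c in range(len(img[0]) // 2)]
            for r in range(len(img) // 2)]

def locate(img):
    """Coarse detection: position (row, col) of the brightest WFOV pixel."""
    return max(((v, r, c) for r, row in enumerate(img)
                for c, v in enumerate(row)))[1:]

def nfov_capture(scene, wfov_pos, size=2):
    """Steer the NFOV imager: crop the full-resolution scene around the
    WFOV location scaled back to full resolution."""
    r0, c0 = 2 * wfov_pos[0], 2 * wfov_pos[1]
    return [row[c0:c0 + size] for row in scene[r0:r0 + size]]

# An 8x8 "scene" that is dark except for one bright object at (5, 6).
scene = [[0] * 8 for _ in range(8)]
scene[5][6] = 255
```

Running `nfov_capture(scene, locate(downsample2(scene)))` yields a small high-resolution patch containing the bright object: the coarse imager finds it, the fine imager captures it.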

Tweening-Based Codec For Scaleable Encoders And Decoders With Varying Motion Computation Capability

US Patent:
6907073, Jun 14, 2005
Filed:
Dec 6, 2000
Appl. No.:
09/731194
Inventors:
Harpreet Singh Sawhney - West Windsor NJ, US
Rakesh Kumar - Monmouth Junction NJ, US
Keith Hanna - Princeton NJ, US
Peter Burt - Princeton NJ, US
Norman Winarsky - Princeton NJ, US
Assignee:
Sarnoff Corporation - Princeton NJ
International Classification:
H04B 1/66
US Classification:
375/240.14
Abstract:
A scaleable video encoder has one or more encoding modes in which at least some, and possibly all, of the motion information used during motion-based predictive encoding of a video stream is excluded from the resulting encoded video bitstream, where a corresponding video decoder is capable of performing its own motion computation to generate its own version of the motion information used to perform motion-based predictive decoding in order to decode the bitstream to generate a decoded video stream. All motion computation, whether at the encoder or the decoder, is preferably performed on decoded data. For example, frames may be encoded as either H, L, or B frames, where H frames are intra-coded at full resolution and L frames are intra-coded at low resolution. The motion information is generated by applying motion computation to decoded L and H frames and used to generate synthesized L frames. L-frame residual errors are generated by performing inter-frame differencing between the synthesized and original L frames and are encoded into the bitstream.
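
The decoder-side synthesis idea can be sketched with a toy codec: keyframes are sent intact, in-between frames are predicted by averaging their decoded neighbors (a plain-averaging stand-in for real motion-compensated tweening), and only the prediction residual is transmitted. This illustrates the principle, not the patented scheme, which uses H/L/B frame types and actual motion computation at the decoder.

```python
# Toy tweening codec: the decoder synthesizes in-between frames itself, so
# the encoder only has to send residuals for them. Frames are flat lists of
# pixel values; assumes an odd number of frames (sequence ends on a keyframe).

def synthesize(prev_frame, next_frame):
    """Decoder-side 'tween': predict the middle frame from its neighbors."""
    return [(a + b) // 2 for a, b in zip(prev_frame, next_frame)]

def encode(seq):
    """Keep even-index frames intact ('H'); for odd-index frames ('L') send
    only the residual against the tweened prediction."""
    out = []
    for i, f in enumerate(seq):
        if i % 2 == 0:
            out.append(("H", f))
        else:
            pred = synthesize(seq[i - 1], seq[i + 1])
            out.append(("L", [a - b for a, b in zip(f, pred)]))
    return out

def decode(stream):
    """Recover the sequence: place keyframes, then re-run the same tween the
    encoder used and add back the transmitted residuals."""
    out = [f if kind == "H" else None for kind, f in stream]
    for i, (kind, resid) in enumerate(stream):
        if kind == "L":
            pred = synthesize(out[i - 1], out[i + 1])
            out[i] = [p + r for p, r in zip(pred, resid)]
    return out

seq = [[0, 10], [4, 12], [8, 20]]
```

Because encoder and decoder run the identical prediction, `decode(encode(seq))` reproduces the input exactly; when the prediction is good, the residuals are small and the motion information itself never needs to be sent, which is the bandwidth saving the abstract describes.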

NOTICE: You may not use PeopleBackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. PeopleBackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.