Terry L Lyon, Age 75, 1430 Silk Oak Dr, Fort Collins, CO 80525

Terry Lyon Phones & Addresses

1430 Silk Oak Dr, Fort Collins, CO 80525 (970) 223-9978

5344 Middlebrook Dr, Rochester, MN 55901 (507) 292-0789

3655 41st St, Rochester, MN 55901 (507) 292-0789

Saint Paul, MN

Austin, TX

Work

Position: Professional/Technical

Education

Degree: Graduate or professional degree

Mentions for Terry L Lyon

Terry Lyon resumes & CV records

Resumes

Senior Engineer

Location: Rochester, MN
Industry: Computer Hardware
Work: IBM, Senior Engineer
Skills: Embedded Systems

Publications & IP owners

US Patents

Method And System For Early Tag Accesses For Lower-Level Caches In Parallel With First-Level Cache

US Patent:
6427188, Jul 30, 2002
Filed:
Feb 9, 2000
Appl. No.:
09/501396
Inventors:
Terry L Lyon - Fort Collins CO
Eric R DeLano - Ft Collins CO
Dean A. Mulla - Saratoga CA
Assignee:
Hewlett-Packard Company - Palo Alto CA
International Classification:
G06F 12/00
US Classification:
711/122
Abstract:
A system and method are disclosed which determine in parallel for multiple levels of a multi-level cache whether any one of such multiple levels is capable of satisfying a memory access request. Tags for multiple levels of a multi-level cache are accessed in parallel to determine whether the address for a memory access request is contained within any of the multiple levels. For instance, in a preferred embodiment, the tags for the first level of cache and the tags for the second level of cache are accessed in parallel. Also, additional levels of cache tags up to N levels may be accessed in parallel with the first-level cache tags. Thus, by the end of the access of the first-level cache tags it is known whether a memory access request can be satisfied by the first-level, second-level, or any additional N-levels of cache that are accessed in parallel. Additionally, in a preferred embodiment, the multi-level cache is arranged such that the data array of a level of cache is accessed only if it is determined that such level of cache is capable of satisfying a received memory access request. Additionally, in a preferred embodiment the multi-level cache is partitioned into N ways of associativity, and only a single way of a data array is accessed to satisfy a memory access request, thereby preserving the remaining ways of a data array to save power and resources that may be accessed to satisfy other instructions.
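
The parallel tag lookup described in this abstract can be illustrated with a short, hypothetical Python sketch. The class, field names, and sizes below are illustrative assumptions, not details from the patent: tags for every cache level are probed for the requested address before any data array is touched, and only the data array of the hitting level and way is then read.

# Minimal sketch of parallel tag access across a multi-level cache.
# All names and sizes are illustrative assumptions, not the patented design.

class CacheLevel:
    def __init__(self, name, num_sets, num_ways):
        self.name = name
        self.num_sets = num_sets
        # tags[set][way] holds the tag stored in that way, or None if empty
        self.tags = [[None] * num_ways for _ in range(num_sets)]
        # data[set][way] holds the cached line
        self.data = [[None] * num_ways for _ in range(num_sets)]

    def probe_tags(self, address):
        """Tag-array access only: report which way (if any) holds the address."""
        set_index = address % self.num_sets
        tag = address // self.num_sets
        for way, stored in enumerate(self.tags[set_index]):
            if stored == tag:
                return set_index, way
        return None

    def read_way(self, set_index, way):
        """Data-array access for a single way, done only after a tag hit."""
        return self.data[set_index][way]

def lookup(levels, address):
    # Conceptually these probes happen in parallel in hardware; here we simply
    # collect every level's tag result before touching any data array.
    probes = [(lvl, lvl.probe_tags(address)) for lvl in levels]
    for level, hit in probes:
        if hit is not None:
            set_index, way = hit
            # Only the hitting level's data array, and only the hitting way,
            # is accessed, which saves power in the other ways and levels.
            return level.name, level.read_way(set_index, way)
    return None  # miss in all probed levels; go to memory

l1, l2 = CacheLevel("L1", 4, 2), CacheLevel("L2", 8, 4)
l2.tags[0x2A % 8][1] = 0x2A // 8
l2.data[0x2A % 8][1] = "line for 0x2A"
print(lookup([l1, l2], 0x2A))   # -> ('L2', 'line for 0x2A')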

Updating And Invalidating Store Data And Removing Stale Cache Lines In A Prevalidated Tag Cache Design

US Patent:
6470437, Oct 22, 2002
Filed:
Dec 17, 1999
Appl. No.:
09/466306
Inventors:
Terry L Lyon - Fort Collins CO
Assignee:
Hewlett-Packard Company - Palo Alto CA
International Classification:
G06F 12/10
US Classification:
711/207, 711/144
Abstract:
In a computer architecture using a prevalidated tag cache design, logic circuits are added to enable store and invalidation operations without impacting integer load data access times and to invalidate stale cache lines. The logic circuits may include a translation lookaside buffer (TLB) architecture to handle store operations in parallel with a smaller, faster integer load TLB architecture. A store valid module is added to the TLB architecture. The store valid module sets a valid bit when a new cache line is written. The valid bit is cleared on the occurrence of an invalidation operation. The valid bit prevents multiple store updates or invalidates for cache lines that are already invalid. In addition, an invalidation will block load hits on the cache line. A control logic is added to remove stale cache lines.
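
A hedged sketch of the store-valid idea in this abstract, using made-up names: one valid bit per cache line is set when the line is filled, cleared on invalidation, and consulted before any further store update, invalidation, or load hit is allowed.

# Toy model of a store-valid bit per cache line (illustrative only).

class PrevalidatedCacheLine:
    def __init__(self):
        self.data = None
        self.store_valid = False   # the "store valid" bit from the abstract

    def fill(self, data):
        """Writing a new cache line sets the valid bit."""
        self.data = data
        self.store_valid = True

    def invalidate(self):
        """Invalidation clears the bit; repeated invalidates become no-ops."""
        if not self.store_valid:
            return False           # already invalid: nothing to do
        self.store_valid = False
        self.data = None
        return True

    def store_update(self, new_data):
        """Store updates are dropped for lines that are already invalid."""
        if not self.store_valid:
            return False
        self.data = new_data
        return True

    def load_hit(self):
        """An invalidated line blocks load hits until it is refilled."""
        return self.data if self.store_valid else None

line = PrevalidatedCacheLine()
line.fill("A")
line.invalidate()
print(line.store_update("B"), line.load_hit())   # False None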

Mechanism For Broadside Reads Of Cam Structures

US Patent:
6493792, Dec 10, 2002
Filed:
Jan 31, 2000
Appl. No.:
09/495155
Inventors:
Stephen R. Undy - Ft. Collins CO
Terry L Lyon - Fort Collins CO
Assignee:
Hewlett-Packard Company - Palo Alto CA
International Classification:
G11C 15/00
US Classification:
711/108, 711/128, 365/49
Abstract:
A CAM providing for the identification of a plurality of multiple bit tag values stored in the CAM, having logic circuitry for comparing each bit of an inputted test value to the corresponding bits of all stored tag values. A bit select is employed for generating a plurality of test bits for sequential input into the logic circuitry. The logic circuitry compares the plurality of test bits to the corresponding bit of each stored tag value and generates a "hit" signal if the selected bit is the same as the corresponding bit of the stored tag value. Storage means are employed for recording the results of the compare with the M hit signal.
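
The bit-serial readout this abstract describes can be sketched in Python. The helper names and the 8-bit width are assumptions for illustration: a test bit is broadcast for each bit position, the per-entry hit signals are recorded, and the stored tags are reconstructed from those recorded compares.

# Sketch of reading back CAM contents through its compare (match) logic.
# Names and the 8-bit tag width are assumptions, not taken from the patent.

TAG_WIDTH = 8
cam_entries = [0b1010_0110, 0b0001_1111, 0b1100_0001]   # stored tag values

def compare_bit(entries, bit_position, test_bit):
    """One broadside compare: a hit signal per entry for a single bit."""
    return [((tag >> bit_position) & 1) == test_bit for tag in entries]

def broadside_read(entries, width):
    """Recover every stored tag by sweeping a test bit across all positions."""
    recovered = [0] * len(entries)
    for bit_position in range(width):
        hits = compare_bit(entries, bit_position, 1)   # test bit = 1
        for i, hit in enumerate(hits):
            if hit:                                    # stored bit must be 1
                recovered[i] |= 1 << bit_position
    return recovered

assert broadside_read(cam_entries, TAG_WIDTH) == cam_entries
print([bin(t) for t in broadside_read(cam_entries, TAG_WIDTH)])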

Apparatus And Method For Virtual Address Aliasing And Multiple Page Size Support In A Computer System Having A Prevalidated Cache

US Patent:
6493812, Dec 10, 2002
Filed:
Dec 17, 1999
Appl. No.:
09/465722
Inventors:
Terry L Lyon - Fort Collins CO
Assignee:
Hewlett-Packard Company - Palo Alto CA
International Classification:
G06F 12/10
US Classification:
711/207, 711/108, 711/210, 711/128
Abstract:
A computer micro-architecture employing a prevalidated cache tag design includes circuitry to support virtual address aliasing and multiple page sizes. Support for various levels of address aliasing is provided through a physical address CAM, page size mask compares and a column copy tag function. Also supported are address aliasing that invalidates aliased lines, address aliasing with TLB entries having the same page sizes, and address aliasing with TLB entries of different page sizes. Multiple page sizes are supported with extensions to the prevalidated cache tag design by adding page size mask RAMs and virtual and physical address RAMs.
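
One piece of this abstract, the page-size mask compare, lends itself to a small illustrative sketch. The helper names and mask width below are assumptions: two addresses are considered to match a TLB entry only in the bits that lie above the page offset for that entry's page size.

# Illustrative page-size mask compare for a TLB supporting multiple page sizes.

def page_mask(page_size_bytes):
    """All address bits above the page offset participate in the compare."""
    return ~(page_size_bytes - 1) & 0xFFFF_FFFF_FFFF

def same_page(virtual_addr, tlb_entry_vaddr, page_size_bytes):
    """Mask compare: offset bits are ignored for larger pages."""
    mask = page_mask(page_size_bytes)
    return (virtual_addr & mask) == (tlb_entry_vaddr & mask)

# A 4 KB entry distinguishes these two addresses, a 4 MB entry does not.
print(same_page(0x0040_2345, 0x0040_0000, 4 * 1024))        # False
print(same_page(0x0040_2345, 0x0040_0000, 4 * 1024 * 1024)) # True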

L1 Cache Memory

US Patent:
6507892, Jan 14, 2003
Filed:
Feb 21, 2000
Appl. No.:
09/510285
Inventors:
Dean A. Mulla - Saratoga CA
Terry L Lyon - Fort Collins CO
Reid James Riedlinger - Fort Collins CO
Tom Grutkowski - Fort Collins CO
Assignee:
Hewlett-Packard Company - Palo Alto CA
Intel Corporation - Santa Clara CA
International Classification:
G06F 13/00
US Classification:
711/131, 711/117, 711/119, 711/149
Abstract:
The inventive cache processes multiple access requests simultaneously by using separate queuing structures for data and instructions. The inventive cache uses ordering mechanisms that guarantee program order when there are address conflicts and architectural ordering requirements. The queuing structures are snoopable by other processors of a multiprocessor system. The inventive cache has a tag access bypass around the queuing structures, to allow for speculative checking by other levels of cache and for lower latency if the queues are empty. The inventive cache allows for at least four accesses to be processed simultaneously. The results of the access can be sent to multiple consumers. The multiported nature of the inventive cache allows for a very high bandwidth to be processed through this cache with a low latency.
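
A rough Python sketch of the queuing and bypass idea in this abstract; the queue discipline and names are assumptions, not the patented design. Data and instruction requests are held in separate queues, and a tag probe may bypass the queues when they are empty to get lower latency.

# Toy model: separate data/instruction queues with a tag-access bypass.
from collections import deque

class L1CacheModel:
    def __init__(self):
        self.data_queue = deque()
        self.inst_queue = deque()
        self.tags = {0x100, 0x200}          # addresses currently cached

    def probe_tags(self, address):
        """Fast tag check, usable speculatively by other cache levels."""
        return address in self.tags

    def issue(self, kind, address):
        queue = self.data_queue if kind == "data" else self.inst_queue
        if not queue and self.probe_tags(address):
            # Bypass: an empty queue plus a tag hit -> serviced with low latency.
            return "bypass-hit"
        # Otherwise the request waits in its own queue (data vs. instruction),
        # so the two streams do not block each other.
        queue.append(address)
        return "queued"

cache = L1CacheModel()
print(cache.issue("data", 0x100))   # bypass-hit
print(cache.issue("inst", 0x300))   # queued (miss waits in the instruction queue)
print(cache.issue("data", 0x200))   # bypass-hit (data queue is still empty)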

Cache Chain Structure To Implement High Bandwidth Low Latency Cache Memory Subsystem

US Patent:
6557078, Apr 29, 2003
Filed:
Feb 21, 2000
Appl. No.:
09/510283
Inventors:
Dean A. Mulla - Saratoga CA
Terry L Lyon - Fort Collins CO
Reid James Riedlinger - Fort Collins CO
Thomas Grutkowski - Fort Collins CO
Assignee:
Hewlett Packard Development Company, L.P. - Houston TX
Intel Corporation - Santa Clara CA
International Classification:
G06F 13/00
US Classification:
711/122, 711/131, 711/120, 711/119
Abstract:
The inventive cache uses a queuing structure which provides out-of-order cache memory access support for multiple accesses, as well as support for managing bank conflicts and address conflicts. The inventive cache can support four data accesses that are hits per clock, support one access that misses the L cache every clock, and support one instruction access every clock. The responses are interspersed in the pipeline, so that conflicts in the queue are minimized. Non-conflicting accesses are not inhibited; conflicting accesses, however, are held up until the conflict clears. The inventive cache provides out-of-order support after the retirement stage of a pipeline.
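
The bank-conflict handling in this abstract can be sketched with a toy scheduler; the bank count, bank mapping, and names are invented for illustration. Each cycle, accesses that target distinct banks issue, possibly out of program order, while accesses that conflict with an already-selected bank are held for a later cycle.

# Toy out-of-order issue with bank-conflict holds (illustrative only).
NUM_BANKS = 4

def schedule(pending):
    """Pick a conflict-free set of accesses for this cycle; hold the rest."""
    busy_banks, issued, held = set(), [], []
    for access in pending:
        bank = (access // 4) % NUM_BANKS       # assumed 4-byte bank interleave
        if bank in busy_banks:
            held.append(access)                # conflict: wait for a later cycle
        else:
            busy_banks.add(bank)
            issued.append(access)
    return issued, held

pending = [0x10, 0x14, 0x20, 0x18]             # 0x10 and 0x20 fall in the same bank
cycle = 1
while pending:
    issued, pending = schedule(pending)
    print(f"cycle {cycle}: issue {[hex(a) for a in issued]}")
    cycle += 1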

Masking Error Detection/Correction Latency In Multilevel Cache Transfers

US Patent:
6591393, Jul 8, 2003
Filed:
Feb 18, 2000
Appl. No.:
09/507208
Inventors:
Shawn Kenneth Walker - Fort Collins CO
Dean A. Mulla - Saratoga CA
Terry L Lyon - Fort Collins CO
Assignee:
Hewlett-Packard Development Company, L.P. - Houston TX
International Classification:
G11C 29/00
US Classification:
714/763
Abstract:
Methods and apparatus mask the latency of error detection and/or error correction applied to data transferred between a first memory and a second memory. The method comprises determining whether there is an error in a data unit in the first memory; transferring data based on the data unit from the first memory to a second memory, wherein the transferring step commences before completion of the determining step; and disabling at least part of the second memory if the determining step detects an error in the data unit. The disabling step may be accomplished, for example, by disabling the buffering of an address of the data unit or stalling the second memory.
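
A simplified sketch of the latency-masking idea, with invented names and a parity check standing in for the real error detection: the line is forwarded toward the second memory while the check is still in flight, and the fill is cancelled (the buffered address dropped) if the check later reports an error.

# Toy model: start the transfer before the error check finishes, then cancel
# the fill in the second memory if the check reports an error. Illustrative only.

def parity(data):
    return bin(data).count("1") % 2

def transfer_with_masked_check(address, data, stored_parity, fill_buffer):
    # Step 1: begin the transfer immediately (speculative fill).
    fill_buffer[address] = data

    # Step 2: the error check completes later, overlapped with the transfer.
    error = parity(data) != stored_parity

    # Step 3: on error, disable the fill by dropping the buffered address,
    # so the bad data never becomes visible in the second memory.
    if error:
        del fill_buffer[address]
    return not error

fill_buffer = {}
print(transfer_with_masked_check(0x40, 0b1011, parity(0b1011), fill_buffer))  # True
print(transfer_with_masked_check(0x80, 0b1011, 0, fill_buffer))               # False
print(fill_buffer)   # only the address 0x40 was actually filled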

Parallel Distributed Function Translation Lookaside Buffer

US Patent:
6625714, Sep 23, 2003
Filed:
Dec 17, 1999
Appl. No.:
09/466494
Inventors:
Terry L Lyon - Fort Collins CO
Assignee:
Hewlett-Packard Development Company, L.P. - Houston TX
International Classification:
G06F 12/10
US Classification:
711/207, 711/168
Abstract:
In a computer system, a parallel, distributed function translation lookaside buffer (TLB) includes a small, fast TLB and a second larger, but slower TLB. The two TLBs operate in parallel, with the small TLB receiving integer load data and the large TLB receiving other virtual address information. By distributing functions, such as load and store instructions, and integer and floating point instructions, between the two TLBs, the small TLB can operate with a low latency and avoid thrashing and similar problems, while the larger TLB provides high bandwidth for memory intensive operations. This mechanism also provides a parallel store update and invalidation mechanism, which is particularly useful for prevalidated cache tag designs.
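
A small Python sketch of the routing idea in this abstract; the class names, capacities, and replacement policy are assumptions. Integer load translations go to a small, fast TLB, everything else to a larger TLB, and invalidations are applied to both structures in parallel.

# Toy parallel, distributed-function TLB: a small fast TLB for integer loads,
# a larger TLB for everything else. Entirely illustrative.
from collections import OrderedDict

class SimpleTLB:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()           # virtual page -> physical page

    def insert(self, vpage, ppage):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the oldest entry
        self.entries[vpage] = ppage

    def translate(self, vpage):
        return self.entries.get(vpage)

    def invalidate(self, vpage):
        self.entries.pop(vpage, None)

class DistributedTLB:
    def __init__(self):
        self.small = SimpleTLB(capacity=32)    # low latency, integer loads only
        self.large = SimpleTLB(capacity=512)   # higher capacity, everything else

    def translate(self, vpage, op):
        tlb = self.small if op == "int_load" else self.large
        return tlb.translate(vpage)

    def invalidate(self, vpage):
        # Store updates / invalidations are broadcast to both TLBs in parallel.
        self.small.invalidate(vpage)
        self.large.invalidate(vpage)

tlb = DistributedTLB()
tlb.small.insert(0x1000, 0x8000)
tlb.large.insert(0x1000, 0x8000)
print(tlb.translate(0x1000, "int_load"))   # 32768 (0x8000), from the small TLB
tlb.invalidate(0x1000)
print(tlb.translate(0x1000, "fp_store"))   # None: both copies were invalidated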
