
Experience-Based Language Acquisition

A Computational Model of Human Language Acquisition

by Brian E. Pangburn

Institution: Louisiana State University
Advisor(s): Dr. S. Sitharama Iyengar (Committee Chair)
Degree: Ph.D. in Computer Science
Year: 2002
Volume: 142 pages
ISBN-10: 1581121717
ISBN-13: 9781581121711

Abstract

Almost from the very beginning of the digital age, people have sought better ways to communicate with computers. This research investigates how computers might be enabled to understand natural language in a more humanlike way. Based, in part, on cognitive development in infants, we introduce an open computational framework for visual perception and grounded language acquisition called Experience-Based Language Acquisition (EBLA). EBLA can watch a series of short videos and acquire a simple language of nouns and verbs corresponding to the objects and object-object relations in those videos. Upon acquiring this protolanguage, EBLA can perform basic scene analysis to generate descriptions of novel videos.

The general architecture of EBLA comprises three stages: vision processing, entity extraction, and lexical resolution. In the vision processing stage, EBLA processes the individual frames of short videos, using a variation of the mean shift analysis image segmentation algorithm to identify and store information about significant objects. In the entity extraction stage, EBLA abstracts information about the significant objects in each video, and about the relationships among those objects, into internal representations called entities. Finally, in the lexical resolution stage, EBLA extracts the individual lexemes (words) from simple descriptions of each video and attempts to generate entity-lexeme mappings using an inference technique called cross-situational learning. EBLA is not primed with a base lexicon, so it must bootstrap its lexicon from scratch.
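To give a sense of the inference step, the following is a minimal Python sketch of cross-situational learning by elimination. It assumes each "experience" pairs a set of entity identifiers with the lexemes taken from that video's description; the function name, data structures, and example data are hypothetical and greatly simplified relative to EBLA's actual implementation.

```python
def cross_situational_learning(experiences, max_passes=10):
    """Infer entity -> lexeme mappings by repeated elimination across experiences."""
    mappings = {}  # resolved entity -> lexeme
    for _ in range(max_passes):
        progress = False
        for entities, lexemes in experiences:
            # Drop entities and lexemes that have already been resolved elsewhere.
            open_entities = [e for e in entities if e not in mappings]
            open_lexemes = [w for w in lexemes if w not in mappings.values()]
            # When exactly one of each remains, the pairing is unambiguous.
            if len(open_entities) == 1 and len(open_lexemes) == 1:
                mappings[open_entities[0]] = open_lexemes[0]
                progress = True
        if not progress:
            break
    return mappings

# Hypothetical experiences: entity IDs stand for segmented objects and
# object-object relations; lexemes come from each video's description.
experiences = [
    ({"hand"},                  {"hand"}),
    ({"hand", "ball", "touch"}, {"hand", "touches", "ball"}),
    ({"ball"},                  {"ball"}),
]
print(cross_situational_learning(experiences))
# {'hand': 'hand', 'ball': 'ball', 'touch': 'touches'}
```

EBLA's actual resolution logic is more involved; the elimination loop above is only meant to convey how ambiguous entity-lexeme pairings can be resolved by accumulating evidence across multiple experiences.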

The performance of EBLA has been evaluated in terms of acquisition speed and accuracy of scene descriptions. For a test set of simple animations, EBLA had average acquisition success rates as high as 100% and average description success rates as high as 96.7%. For a larger set of real videos, EBLA had average acquisition success rates as high as 95.8% and average description success rates as high as 65.3%. The lower description success rate for the real videos is attributed to the wide variance in entities across those videos.

While there have been several systems capable of learning object or event labels for videos, EBLA is the first known system to acquire both nouns and verbs using a grounded computer vision system.

About The Author

Brian Edward Pangburn was born in Melrose, Massachusetts, on June 12, 1972. He received his bachelor of science degree in mechanical engineering from Tulane University in 1994. He joined the doctoral program at Louisiana State University in 1994 and received his master of science degree in system science in 1999. He will receive the degree of Doctor of Philosophy in computer science at the December 2002 commencement. Since 1991, Mr. Pangburn has developed software to administer retirement plans. From 1994 to 1996, he developed software as a consultant for American Financial Systems in Weston, Massachusetts. In 1996, he co-founded The Pangburn Company, a third-party administration company dealing exclusively with nonqualified deferred compensation retirement plans. Apart from language acquisition modeling, his research interests include artificial intelligence, computer vision, and database theory. Mr. Pangburn has a wife, Jaimee, and twin sons, Jack and Jeremiah. He and his family reside in Ventress, Louisiana.