Introducing Natural Language Program Analysis Lori Pollock, K. Vijay-Shanker, David Shepherd, Emily Hill, Zachary P. Fry, Kishen Maloor
NLPA Research Team Leaders K. Vijay-Shanker “The Umpire” Lori Pollock “Team Captain” University of Delaware
Problem • Modern software is large and complex • Software development tools are needed (figure: an object-oriented class hierarchy)
Successes in Software Development Tools • Good with local tasks • Good with traditional structure
Issues in Software Development Tools • Scattered tasks are difficult • Programmers use more than traditional program structure
Observations in Software Development Tools • Key Insight: Programmers leave natural language clues that can benefit software development tools (figure: an object-oriented drawing system with code such as public interface Storable { ... // Store the fields in a file ... } and public void Circle.save(), annotated with actions: undo action, update drawing, activate tool, save drawing)
Studies on Choosing Identifiers • Impact of human cognition on names [Liblit et al. PPIG 06]: metaphors, morphology, scope, part-of-speech hints; hints for understanding code • Analysis of function identifiers [Caprile and Tonella WCRE 99]: lexical, syntactic, semantic; use for software tools: metrics, traceability, program understanding (cartoon: Pete, the programmer, and Carla, the compiler writer: "So, I could use x, y, z. But no one will understand my code." "I don't care about names.")
Our Research Path • Motivated usefulness of exploiting natural language (NL) clues in tools [MACS 05, LATE 05] • Developed extraction process and an NL-based program representation [AOSD 06] • Created and evaluated a concern location tool and an aspect miner with NL-based analysis [ASE 05, AOSD 07, PASTE 07]
Name: David C. Shepherd. Nickname: Leadoff Hitter. Current Position: PhD May 30, 2007. Future Position: Postdoc, Gail Murphy. Stats (year: coffees/day, redmarks/paper draft): 2002: 0.1, 500; 2007: 2.2, 100
Applying NL Clues for Aspect Mining • Aspect Mining Task: locate refactoring candidates for Aspect-Oriented Programming (cartoon: Molly, the Maintainer: "How can I fix Paul's atrocious code?")
Timna: An Aspect Mining Framework [ASE 05] • Uses program analysis clues for mining • Combines clues using machine learning • Evaluated vs. Fan-in on Precision (quality) and Recall (completeness): Fan-In P=37, R=2; Timna P=62, R=60
Integrating NL Clues into Timna • iTimna (Timna with NL) integrates natural language clues • Example: opposite verbs (open and close) • Results: Fan-In P=37, R=2; Timna P=62, R=60; iTimna P=81, R=73 • Natural language information increases the effectiveness of Timna [Come back Thurs 10:05am]
Applying NL Clues for Concern Location • Motivation: 60-90% of software costs are spent on reading and navigating code for maintenance* (fixing bugs, adding features, etc.) *[Erlikh] Leveraging Legacy System Dollars for E-Business
Key Challenge: Concern Location • Find, collect, and understand all source code related to a particular concept • Concerns are often crosscutting
State of the Art for Concern Location • Mining Dynamic Information [Wilde ICSM 00]: slow • Program Structure Navigation [Robillard FSE 05, FEAT, Schaefer ICSM 05]: reduced to a similar problem • Search-Based Approaches: RegEx [grep, Aspect Mining Tool 00]: fast but fragile • LSA-Based [Marcus 04]: sensitive • Word-Frequency Based [GES 06]: no semantics
Limitations of Search Techniques • Return large result sets • Return irrelevant results • Return hard-to-interpret result sets
The Find-Concept Approach • 1. More effective search • 2. Improved search terms • 3. Understandable results (diagram: a concept and a concrete query feed Find-Concept, which combines an NL-based code representation of the source code with natural language information to produce recommendations, Methods a-e, and a result graph)
Underlying Program Analysis • Action-Oriented Identifier Graph (AOIG) [AOSD 06] • Provides access to NL information • Provides interface between NL and traditional • Word Recommendation Algorithm • NL-based • Stemmed/Rooted: complete, completing • Synonym: finish, complete • Combining NL and Traditional • Co-location: completeWord()
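The word recommendation idea above can be sketched in a few lines. This is a hypothetical illustration, not the tool's actual algorithm: a crude suffix-stripping stem() stands in for real morphological analysis, and synonym lookup (e.g. via a thesaurus such as WordNet) is omitted entirely.

```java
import java.util.*;

public class WordRecommender {
    // Strip a few common English suffixes (crude stand-in for a real stemmer),
    // so "complete" and "completing" map to the same root "complet".
    static String stem(String w) {
        w = w.toLowerCase();
        for (String suf : new String[]{"ing", "ed", "es", "e", "s"}) {
            if (w.endsWith(suf) && w.length() - suf.length() >= 4) {
                return w.substring(0, w.length() - suf.length());
            }
        }
        return w;
    }

    // Recommend identifier words that share a stem with the query word.
    static List<String> recommend(String query, List<String> identifierWords) {
        String q = stem(query);
        List<String> out = new ArrayList<>();
        for (String w : identifierWords) {
            if (stem(w).equals(q)) out.add(w);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("completing", "completes", "finish", "complete");
        // "completed" matches the first, second, and fourth word by shared stem.
        System.out.println(recommend("completed", words));
    }
}
```

A real implementation would also exploit co-location (e.g. completeWord()) to tie NL clues back to program structure, as the slide notes.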
Experimental Evaluation: Find-Concept, GES, ELex • Research Questions • Which search tool is most effective at forming and executing a query for concern location? • Which search tool requires the least human effort to form an effective query? • Methodology: 18 developers complete nine concern location tasks on medium-sized (>20 KLOC) programs • Measures: Precision (quality), Recall (completeness), F-Measure (combination of both P & R)
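The three measures relate as follows; a minimal sketch with invented counts (8 relevant methods retrieved out of 10 returned, 16 relevant methods overall), where the F-measure is the harmonic mean of precision and recall:

```java
public class Measures {
    // Precision: fraction of retrieved methods that are relevant.
    static double precision(int relevantRetrieved, int retrieved) {
        return (double) relevantRetrieved / retrieved;
    }

    // Recall: fraction of all relevant methods that were retrieved.
    static double recall(int relevantRetrieved, int relevant) {
        return (double) relevantRetrieved / relevant;
    }

    // F-measure: harmonic mean, combining both P and R into one number.
    static double fMeasure(double p, double r) {
        return (p + r == 0) ? 0 : 2 * p * r / (p + r);
    }

    public static void main(String[] args) {
        double p = precision(8, 10); // 8 of 10 returned methods are relevant
        double r = recall(8, 16);    // 8 of 16 relevant methods were found
        System.out.printf("P=%.2f R=%.2f F=%.2f%n", p, r, fMeasure(p, r));
    }
}
```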
Overall Results (across all tasks) • Effectiveness • FC > ELex with statistical significance • FC >= GES on 7/9 tasks • FC is more consistent than GES • Effort: FC = ELex = GES • FC is more consistent and more effective in experimental study without requiring more effort
Natural Language Extraction from Source Code • Key Challenges: • Decode name usage • Develop automatic extraction process • Create NL-based program representation (cartoon: Molly, the Maintainer: "What was Pete thinking when he wrote this code?")
Natural Language: Which Clues to Use? • Software Maintenance • Typically focused on actions • Objects are well-modularized • Focus on actions • Correspond to verbs • Verbs need Direct Object (DO) • Extract verb-DO pairs
Extracting Verb-DO Pairs • Two types of extraction: from comments and from method signatures
class Player {
  /** Play a specified file with specified time interval */
  public static boolean play(final File file, final float fPosition, final long length) {
    fCurrent = file;
    try {
      playerImpl = null;
      // make sure to stop non-fading players
      stop(false);
      // Choose the player
      Class cPlayer = file.getTrack().getType().getPlayerImpl();
      …
  }
Extracting Clues from Signatures • POS Tag Method Name • Chunk Method Name • Identify Verb and Direct-Object (DO)
public UserList getUserListFromFile( String path ) throws IOException {
  try {
    File tmpFile = new File( path );
    return parseFile(tmpFile);
  } catch( java.io.IOException e ) {
    throw new IOException( "UserList format issue " + path + " file " + e );
  }
}
POS Tag: get<verb> User<adj> List<noun> From<prep> File<noun>
Chunk: get<verb phrase> User List<noun phrase> FromFile<prep phrase>
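The splitting and tagging steps can be approximated as below. This is a simplified sketch: the identifier is split on camelCase boundaries and tagged from a tiny hand-made lexicon, whereas the actual work uses a trained POS tagger (which, as the slide shows, can tag "User" as an adjective rather than a noun as this toy lexicon does).

```java
import java.util.*;

public class NameTagger {
    // Split a camelCase identifier into words:
    // getUserListFromFile -> [get, User, List, From, File]
    static List<String> split(String name) {
        return Arrays.asList(
            name.split("(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])"));
    }

    // Toy POS lexicon standing in for a trained tagger (illustrative only).
    static final Set<String> VERBS = Set.of("get", "set", "add", "remove", "save", "play", "parse");
    static final Set<String> PREPS = Set.of("from", "to", "of", "with", "in");

    static String tag(String word) {
        String w = word.toLowerCase();
        if (VERBS.contains(w)) return "verb";
        if (PREPS.contains(w)) return "prep";
        return "noun"; // default; a real tagger distinguishes nouns/adjectives
    }

    public static void main(String[] args) {
        for (String w : split("getUserListFromFile")) {
            System.out.println(w + "<" + tag(w) + ">");
        }
    }
}
```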
Name: Zak Fry. Nickname: The Rookie. Current Position: Upcoming senior. Future Position: Graduate School. Stats (year: diet cokes/day, lab days/week): 2006: 1, 2; 2007: 6, 8
Developing Rules for Extraction • For many methods: • Identify relevant verb (V) and direct object (DO) in method signature • Classify pattern of V and DO locations • If new pattern, create new extraction rule (diagram: observed orderings of verb and DO within signatures, e.g. verb DO and DO verb)
Our Current Extraction Rules • 4 general rules with subcategories: • Left Verb: URL parseURL() → <parse, URL> • Right Verb: void mouseDragged() → <dragged, mouse> • Generic Verb: void Host.onSaved() → <saved, host> • Unidentified Verb: void message() → <-, message>
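A rough sketch of how the rule shapes above might be applied to a method signature. The verb list, the class-name fallback for onSaved-style names, and the return-type fallback for verb-only names are illustrative assumptions, not the published extraction rules.

```java
import java.util.*;

public class VerbDoExtractor {
    // Tiny illustrative verb lexicon; the real system recognizes many more.
    static final Set<String> VERBS = Set.of(
        "parse", "dragged", "saved", "get", "add", "remove", "play", "save");

    // Returns {verb, directObject} for a method, trying the rule shapes:
    // left verb, right verb (with class-name fallback), unidentified verb.
    static String[] extract(String className, String methodName, String returnType) {
        List<String> words = Arrays.asList(methodName.split("(?<=[a-z])(?=[A-Z])"));
        String first = words.get(0).toLowerCase();
        String last = words.get(words.size() - 1).toLowerCase();
        if (VERBS.contains(first)) {
            // Left verb: DO follows the verb; for verb-only names, fall back
            // to the return type (hypothetical sub-rule).
            String dObj = words.size() > 1
                ? String.join(" ", words.subList(1, words.size()))
                : returnType;
            return new String[]{first, dObj};
        }
        if (VERBS.contains(last)) {
            // Right verb (e.g. mouseDragged): DO precedes the verb; for
            // onSaved-style names, use the declaring class as the DO.
            String pre = String.join(" ", words.subList(0, words.size() - 1)).toLowerCase();
            if (pre.equals("on") || pre.isEmpty()) pre = className.toLowerCase();
            return new String[]{last, pre};
        }
        // Unidentified verb: no verb found, keep the name as the DO.
        return new String[]{"-", methodName};
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(extract("Parser", "parseURL", "URL")));
        System.out.println(Arrays.toString(extract("Host", "onSaved", "void")));
    }
}
```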
Sub-Categories for Left-Verb General Rule • Look beyond the method name: parameters, return type, declaring class name, type hierarchy • Example: Verb-DO pair <remove, UserID>
Representing Verb-DO Pairs • Action-Oriented Identifier Graph (AOIG) (diagram: verb nodes verb1, verb2, verb3 and direct-object nodes DO1, DO2, DO3 are connected through pair nodes (verb1, DO1), (verb1, DO2), (verb3, DO2), (verb2, DO3), each linked by use edges to source code files)
Representing Verb-DO Pairs • Action-Oriented Identifier Graph (AOIG), instantiated (diagram: verbs play, add, remove and direct objects file, playlist, listener form pair nodes (play, file), (play, playlist), (remove, playlist), (add, listener), each linked by use edges to source code files)
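The AOIG's core mapping, pair nodes with use edges into code, can be sketched as a map from verb-DO pairs to the methods that use them. The method names below are hypothetical, loosely echoing the jukebox example.

```java
import java.util.*;

public class Aoig {
    // verb-DO pair -> methods using it: the "use" edges of the AOIG sketch.
    private final Map<String, Set<String>> pairToMethods = new HashMap<>();

    void addUse(String verb, String directObject, String method) {
        pairToMethods
            .computeIfAbsent(verb + "," + directObject, k -> new TreeSet<>())
            .add(method);
    }

    // All methods whose extracted action matches the given verb-DO pair.
    Set<String> methodsFor(String verb, String directObject) {
        return pairToMethods.getOrDefault(verb + "," + directObject, Set.of());
    }

    public static void main(String[] args) {
        Aoig g = new Aoig();
        g.addUse("play", "file", "Player.play(File)");       // hypothetical
        g.addUse("play", "playlist", "Playlist.playNext()"); // hypothetical
        g.addUse("remove", "playlist", "Playlist.remove()"); // hypothetical
        System.out.println(g.methodsFor("play", "file"));
    }
}
```

This gives a query interface between natural language clues and program structure: given an action ("play a file"), the graph returns the code locations that perform it.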
Evaluation of Extraction Process • Compare automatic vs ideal (human) extraction • 300 methods from 6 medium open source programs • Annotated by 3 Java developers • Promising Results • Precision: 57% • Recall: 64% • Context of Results • Did not analyze trivial methods • On average, at least verb OR direct object obtained
Name: Emily Gibson Hill. Nickname: Batter on Deck. Current Position: 2nd year PhD Student. Future Position: PhD Candidate. Stats (year: cokes/day, meetings/week): 2003: 0.2, 1; 2007: 2, 5
Ongoing work: Program Exploration • Purpose: Expedite software maintenance and program comprehension • Key Insight: Automated tools can use program structure and identifier names to save the developer time and effort
Dora the Program Explorer* • Natural Language Query: maintenance request, expert knowledge, query expansion • Program Structure: representation (current: call graph), seed starting point • Relevant Neighborhood: subgraph relevant to query (diagram: Query → Dora → Relevant Neighborhood) * Dora comes from exploradora, the Spanish word for a female explorer.
State of the Art in Exploration • Structural (dependence, inheritance) • Slicing • Suade [Robillard 2005] • Lexical (identifier names, comments) • Regular expressions: grep, Eclipse search • Information Retrieval: FindConcept, Google Eclipse Search [Poshyvanyk 2006]
Example Scenario • Motivating need for structural and lexical information • Program: JBidWatcher, an eBay auction sniping program • Bug: User-triggered add auction event has no effect • Task: Locate code related to ‘add auction’ trigger • Seed: DoAction() method, from prior knowledge
Looking for: ‘add auction’ trigger, using only structural information • DoAction() has 38 callees; only 2/38 (DoAdd() and DoPasteFromClipboard()) are relevant, the rest (DoNada(), ...) are irrelevant • Locates locally relevant items, but many irrelevant • And what if you wanted to explore more than one edge away?
Looking for: ‘add auction’ trigger Using only lexical information • 50/1812 methods contain matches to ‘add*auction’ regular expression query • Only 2/50 are relevant • Locates globally relevant items, but many irrelevant
Looking for: ‘add auction’ trigger, combining structural & lexical information • Structural: guides exploration from the DoAction() seed • Lexical: prunes irrelevant edges, leaving DoAdd() and DoPasteFromClipboard() in the relevant neighborhood and discarding the DoNada() callees
The Dora Approach • Prune irrelevant structural edges from seed: • Determine method relevance to query • Calculate lexical-based relevance score • Low-scored methods pruned from neighborhood • Recursively explore
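The prune-and-recurse loop can be sketched as follows, assuming a call graph and precomputed lexical relevance scores. The graph, the scores, and the threshold are illustrative (taken loosely from the JBidWatcher example), not Dora's actual implementation.

```java
import java.util.*;

public class DoraSketch {
    // Hypothetical call graph: method -> callees.
    static Map<String, List<String>> calls = new HashMap<>();
    // Hypothetical lexical relevance scores (computed as on the next slides).
    static Map<String, Double> score = new HashMap<>();

    // Recursively explore from the seed, pruning callees below the threshold.
    static Set<String> explore(String seed, double threshold, Set<String> visited) {
        visited.add(seed);
        for (String callee : calls.getOrDefault(seed, List.of())) {
            if (!visited.contains(callee)
                    && score.getOrDefault(callee, 0.0) >= threshold) {
                explore(callee, threshold, visited); // follow relevant edges only
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        calls.put("DoAction", Arrays.asList("DoAdd", "DoNada", "DoPasteFromClipboard"));
        score.put("DoAdd", 0.93);
        score.put("DoPasteFromClipboard", 0.60);
        score.put("DoNada", 0.05);
        // With threshold 0.5, DoNada is pruned; the rest form the neighborhood.
        System.out.println(explore("DoAction", 0.5, new TreeSet<>()));
    }
}
```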
Calculating Relevance Score: Term Frequency • Query: ‘add auction’ • Score based on query term frequency of the method (slide example: one method with 6 query term occurrences vs. another with only 2)
Calculating Relevance Score: Location Weights • Query: ‘add auction’ • Weigh term frequency based on location: • Method name more important than body • Method body statements normalized by length
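A toy version of the location-weighted score: a hit in the method name dominates, and body hits are normalized by body length. The 0.8/0.2 weights are invented for illustration; the slides do not give the actual scoring formula.

```java
import java.util.*;

public class RelevanceScore {
    // Score a method against query terms: name matches weigh more than body
    // matches, and body term frequency is normalized by body length.
    static double score(String methodName, List<String> bodyWords, Set<String> queryTerms) {
        double nameWeight = 0.8, bodyWeight = 0.2; // hypothetical weights
        long nameHits = splitCamel(methodName).stream()
            .filter(w -> queryTerms.contains(w.toLowerCase())).count();
        long bodyHits = bodyWords.stream()
            .filter(w -> queryTerms.contains(w.toLowerCase())).count();
        double nameScore = nameHits > 0 ? 1.0 : 0.0;
        double bodyScore = bodyWords.isEmpty()
            ? 0.0 : (double) bodyHits / bodyWords.size(); // normalize by length
        return nameWeight * nameScore + bodyWeight * bodyScore;
    }

    static List<String> splitCamel(String name) {
        return Arrays.asList(name.split("(?<=[a-z])(?=[A-Z])"));
    }

    public static void main(String[] args) {
        Set<String> q = Set.of("add", "auction");
        // "DoAdd" matches on its name and twice in a 4-word body.
        System.out.println(score("DoAdd", Arrays.asList("auction", "entry", "add", "list"), q));
    }
}
```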
Dora explores ‘add auction’ trigger From DoAction() seed: • Correctly identified at 0.5 threshold • DoAdd() (0.93) • DoPasteFromClipboard() (0.60) • With only one false positive • DoSave() (0.52)
Summary • NL technology used: synonyms, collocations, morphology, word frequencies, part-of-speech tagging, AOIG • Evaluation indicates: natural language information shows promise for improving software development tools • Key to success: accurate extraction of NL clues
Our Current and Future Work • Basic NL-based tools for software • Abbreviation expander • Program synonyms • Determining relative importance of words • Integrating information retrieval techniques
Posed Questions for Discussion • What open problems faced by software tool developers can be mitigated by NLPA? • Under what circumstances is NLPA not useful?