Design and implement a full-text retrieval engine based on Lucene that provides flexible retrieval methods and supports natural-language retrieval. It includes a web crawler for collecting website content to be indexed and searched.
Design a full-text search engine for a website based on Lucene Presented by: Lijia Li, Yingyu Wu, Xiao Zhu
Outline • Introduction • Our goal • System architecture • Conclusion and future work • Show demo
Introduction • With the rapid development of the Internet, the amount of online information has grown explosively, making it harder to find the target information. Search engines bring great convenience to people looking for information and have become an indispensable tool.
Our goal • In this project, our goal is to implement a full-text retrieval engine based on Lucene.
Full-text retrieval engine • A full-text search engine builds its index and performs searches over the entire text of each document. • Features: (1) an unstructured index file database (2) flexible retrieval methods (3) support for natural-language retrieval (4) high retrieval efficiency
System Architecture • The search engine provides the searching service to users. Our search engine has two main parts: an online part and an offline part.
[System architecture diagram: in the online part, users enter keywords through the user interface; the analyzer and search module search the index file and the sorted results are returned. In the offline part, the crawler requests web pages from the website, stores them in the website database, and the index module builds the index file from them.]
Why Lucene • The index file format is independent of the application platform • Inverted index • Object-oriented system architecture • Chinese analyzers (SmartChineseAnalyzer, IKAnalyzer) • A set of powerful query types (RangeQuery, FuzzyQuery, ...) • Open source
Web Crawler Analysis robots.txt Get robots.txt Page database Collection of start URL URL Analysis Page fetch module Internet URL Unprocessed URL queue Page analysis module Extract Links Architecture of web crawler
Work flow of web crawler
1. Put the initial URLs into the unprocessed URL queue
2. Take a URL address from the head of the queue
3. Download the page at that URL
4. Extract the hyperlinks from the downloaded page
5. Add the extracted hyperlinks to the unprocessed URL queue
6. Check whether the unprocessed URL queue is empty: if yes, the program terminates; otherwise go back to step 2 and loop
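A minimal sketch of this crawl loop, assuming Java 11's built-in HttpClient and a simple regular expression for link extraction; the seed URL https://example.com/ and the regex are illustrative placeholders, not part of the original system:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SimpleCrawler {
    // Matches href="..." attributes; a simplification of the page analysis module.
    private static final Pattern LINK = Pattern.compile("href=\"(http[^\"]+)\"");

    public static void main(String[] args) throws Exception {
        Deque<String> queue = new ArrayDeque<>();   // unprocessed URL queue
        Set<String> visited = new HashSet<>();      // URLs already fetched
        queue.add("https://example.com/");          // collection of start URLs (hypothetical seed)

        HttpClient client = HttpClient.newHttpClient();
        while (!queue.isEmpty()) {                  // step 6: terminate when the queue is empty
            String url = queue.poll();              // step 2: take a URL from the head of the queue
            if (!visited.add(url)) continue;        // skip URLs seen before

            HttpResponse<String> resp = client.send(   // step 3: download the page
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());

            Matcher m = LINK.matcher(resp.body());  // step 4: extract hyperlinks
            while (m.find()) {
                queue.add(m.group(1));              // step 5: enqueue the extracted links
            }
            // In the real system the page body would be stored in the page database for indexing.
        }
    }
}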
[Indexing work flow: for each document in the set to be indexed, read and analyze it; if it is already indexed and the index date is not earlier than the document's creation date, skip it; otherwise determine the document type, and if a parser for that type exists, call the corresponding document parser to parse the document and build the index file.]
Document indexing steps
1. Create an IndexWriter instance: IndexWriter writer = new IndexWriter(indexPath, analyzer, create, maxFieldLength)
2. Create a Document record: Document doc = new Document()
3. Add Field objects to the Document record: doc.add(new Field(name, tokenStream))
4. Write the Document record into the index: writer.addDocument(doc)
5. Close the IndexWriter object to end indexing: writer.close()
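As a concrete illustration of these five steps, here is a minimal sketch assuming the Lucene 3.x API that matches the constructor shown above; the index path "indexDir", the field names "url" and "contents", and the sample values are hypothetical:

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class Indexer {
    public static void main(String[] args) throws Exception {
        // Step 1: create an IndexWriter instance (Lucene 3.x style).
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File("indexDir")),   // hypothetical index path
                new StandardAnalyzer(Version.LUCENE_36),
                true,                                     // create a new index
                IndexWriter.MaxFieldLength.UNLIMITED);

        // Step 2: create a Document record.
        Document doc = new Document();

        // Step 3: add Field objects to the Document record.
        doc.add(new Field("url", "https://example.com/",
                Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.add(new Field("contents", "page text extracted by the crawler",
                Field.Store.YES, Field.Index.ANALYZED));

        // Step 4: write the Document record into the index.
        writer.addDocument(doc);

        // Step 5: close the IndexWriter object to end indexing.
        writer.close();
    }
}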
Flow chart of searching • Accept the search string from the user • QueryParser analyzes the search string and outputs a Query object • Set up the Searcher • The IndexSearcher object searches for related documents in the index file • Output the related documents. Example: user input "大连理工 计算机" (Dalian University of Technology, computer) or "america ohio" becomes, after QueryParser, "大连理工" AND "计算机" or "america" AND "ohio".
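A minimal search sketch under the same Lucene 3.x assumption; the index path "indexDir", the field name "contents", and the sample query string are placeholders. Setting the default operator to AND reproduces the behavior in the example above:

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class Searcher {
    public static void main(String[] args) throws Exception {
        // Set up the Searcher over the index file.
        IndexSearcher searcher = new IndexSearcher(FSDirectory.open(new File("indexDir")));

        // QueryParser turns the raw search string into a Query object;
        // with AND as the default operator, "america ohio" becomes "america" AND "ohio".
        QueryParser parser = new QueryParser(Version.LUCENE_36, "contents",
                new StandardAnalyzer(Version.LUCENE_36));
        parser.setDefaultOperator(QueryParser.AND_OPERATOR);
        Query query = parser.parse("america ohio");

        // IndexSearcher looks up related documents in the index file.
        TopDocs hits = searcher.search(query, 10);
        for (ScoreDoc sd : hits.scoreDocs) {
            Document doc = searcher.doc(sd.doc);
            System.out.println(doc.get("url"));   // output the related documents
        }
        searcher.close();
    }
}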
Highlight search keywords • Get the position values of the search keywords • Get the text fragments containing the keywords according to those position values • Use HTML and CSS attributes to highlight the keywords
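A possible implementation of these three steps with Lucene's contrib Highlighter module (a sketch, assuming Lucene 3.x; the CSS class name "keyword" and the field name "contents" are illustrative):

import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleFragmenter;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
import org.apache.lucene.util.Version;

public class HighlightExample {
    // Returns an HTML fragment of 'text' with the query terms wrapped in a styled <span>.
    static String highlight(Query query, String text) throws Exception {
        // SimpleHTMLFormatter supplies the HTML/CSS markup placed around each keyword.
        SimpleHTMLFormatter formatter =
                new SimpleHTMLFormatter("<span class=\"keyword\">", "</span>");
        // QueryScorer locates the positions of the query terms inside the text.
        Highlighter highlighter = new Highlighter(formatter, new QueryScorer(query));
        // SimpleFragmenter cuts the text into fragments around the matched positions.
        highlighter.setTextFragmenter(new SimpleFragmenter(100));

        TokenStream tokens = new StandardAnalyzer(Version.LUCENE_36)
                .tokenStream("contents", new StringReader(text));
        return highlighter.getBestFragment(tokens, text);
    }
}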
Conclusion and future work • Through this project we learned how to use a web crawler and Lucene to implement a full-text search engine. • Future work: running the system on Hadoop. • Thank you!