Jan 27, 2010
A Web Crawler for Automated Location of Genomic Resources
The purpose of this project is to develop a Web Crawler based software package that locates and downloads genomic data. A Web Crawler is a program that searches the internet for particular strings of interest. The Web Crawler developed here works by seeking out files in a set of web pages and downloading those that are of interest to the user. What counts as "of interest" depends on the application; in this case, the files of interest are genomic resources.
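The abstract does not describe the package's internals, but the behaviour it outlines (scan a page for links, keep those matching genomic file types, download them) can be sketched in Python. This is a minimal illustration, not the project's actual code; the extension list and function names are assumptions.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlretrieve

# Illustrative file extensions that mark a link as a genomic resource;
# the real project's criteria are not specified in the abstract.
GENOMIC_EXTENSIONS = (".fasta", ".fa", ".gb", ".gff", ".fastq")

def is_genomic_resource(url):
    """Return True if the URL points at a file type of interest."""
    return url.lower().endswith(GENOMIC_EXTENSIONS)

class LinkExtractor(HTMLParser):
    """Collect every anchor href found in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(base_url, html):
    """Parse a page and return its links as absolute URLs."""
    parser = LinkExtractor()
    parser.feed(html)
    return [urljoin(base_url, href) for href in parser.links]

def crawl_page(url, fetch):
    """Download every genomic-resource file linked from one page.

    `fetch` is a callable that returns the page's HTML as a string,
    so network access can be stubbed out for testing.
    """
    for link in extract_links(url, fetch(url)):
        if is_genomic_resource(link):
            urlretrieve(link, link.rsplit("/", 1)[-1])
```

A full crawler would additionally follow page-to-page links, track visited URLs, and respect robots.txt; those concerns are omitted here for brevity.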
Author: Steven James Mayocchi
Source: The University of Queensland