When we try to export information from a web page, we usually copy the desired block into a word processor using copy/paste. This procedure has many disadvantages: the information is hard to manipulate because it arrives in the web page's own format, with hyperlinks, sizes, fonts, and so on, all mixed inside tables. With Visual Web Ripper you can avoid these problems.

First, you open the desired web page from the Visual Web Ripper main window, which lets you explore its content and choose only the information you wish to extract. You can capture not only text and images, but also HTML content, tags, files, links, and attributes. Before capturing the information, you create a project that selects the elements to capture; the content or HTML code of the selected element is displayed in the upper center of the screen. You can also use templates to extract blocks of information of the same type.

After you have selected all the elements to capture, you can save the extracted data in many different formats: XML, SQL Server, MySQL, OleDb, CSV, and Excel XML. Finally, run the project to begin extracting the data; the "view browsing" option lets you watch how the program navigates the web page and extracts the information.
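The core idea Visual Web Ripper automates, selecting elements such as links and their text from a page rather than copy/pasting a formatted block, can be sketched in plain Python with the standard library's `html.parser`. This is only an illustration of the general extract-by-element technique, not Visual Web Ripper's own API; the sample page content is invented for the example.

```python
from html.parser import HTMLParser

class LinkAndTextExtractor(HTMLParser):
    """Collects hyperlinks and their visible text from an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []            # list of (href, text) pairs
        self._current_href = None  # href of the <a> tag we are inside, if any
        self._buffer = []          # text fragments seen inside that <a> tag

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._buffer = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append((self._current_href, "".join(self._buffer).strip()))
            self._current_href = None

# A hypothetical page fragment standing in for a downloaded web page.
page = '<p>See <a href="https://example.com">an example</a> here.</p>'
extractor = LinkAndTextExtractor()
extractor.feed(page)
print(extractor.links)  # → [('https://example.com', 'an example')]
```

The same pattern extends to any element type: watch for the tags you care about in `handle_starttag`, accumulate their contents, and emit a structured record when the closing tag arrives.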
- It allows you to capture text, HTML, tags, files, links, images, and attributes.
- You can save the extracted data in many different formats, including XML, SQL Server, MySQL, OleDb, CSV, and Excel XML.
- The trial version can extract a maximum of 100 web elements.
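The export step, turning captured records into formats such as CSV or XML, can likewise be sketched with Python's standard `csv` and `xml.etree.ElementTree` modules. This is a generic illustration of the idea, not Visual Web Ripper's implementation, and the sample records are invented.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical records captured from a page: one dict per extracted element.
rows = [
    {"title": "Example Domain", "url": "https://example.com"},
    {"title": "IANA", "url": "https://www.iana.org"},
]

# CSV export: a header row followed by one line per captured record.
csv_buffer = io.StringIO()
writer = csv.DictWriter(csv_buffer, fieldnames=["title", "url"])
writer.writeheader()
writer.writerows(rows)
print(csv_buffer.getvalue())

# XML export: one <item> element per captured record.
root = ET.Element("items")
for row in rows:
    item = ET.SubElement(root, "item")
    for key, value in row.items():
        ET.SubElement(item, key).text = value
print(ET.tostring(root, encoding="unicode"))
```

Database targets such as SQL Server or MySQL follow the same shape: the captured records are mapped onto columns and inserted row by row instead of being serialized to a file.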