Quick method to wget my local wiki without dumping MySQL: need advice

I need advice.

I have a webserver VM (on the LAN, not on the internet) that hosts two wikis: HomeWorkWiki and GameWiki.

I want to wget only the HomeWorkWiki pages, without crawling into the GameWiki.

My goal is to grab just the HTML pages (ignoring images and all other files) with wget. I don't want to do a mysqldump or a MediaWiki export; this is for my (non-IT) boss, who just wants to double-click the HTML files.

How can I run wget so that it crawls only the HomeWorkWiki, and not the GameWiki, on this VM?

Thanks

1 Answer

The solution was either to use HTTrack, carefully customizing its wizard, or this one-liner with wget (note that wget ignores a `~/robots.txt` file; the clean way to disable robots handling is the `-e robots=off` option, which is equivalent to putting `robots = off` in `~/.wgetrc`):

wget -e robots=off --mirror --convert-links --html-extension --no-parent --wait=0 ""
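Why this stays inside one wiki: `--no-parent` stops wget from ascending above the directory of the start URL, so pointing it at the HomeWorkWiki path walls off GameWiki automatically, and a `-R` reject list drops images and other non-HTML assets. A minimal sketch, assuming the wikis live under per-wiki paths like `/HomeWorkWiki/` (the hostname and path below are placeholders, not from the question):

```shell
#!/bin/sh
# Hypothetical LAN URL; substitute your VM's real HomeWorkWiki address.
BASE_URL="http://webserver.lan/HomeWorkWiki/"

# --no-parent     : never climb above the start directory, so GameWiki is never visited
# -e robots=off   : ignore robots.txt for this run (no config-file edit needed)
# -R ...          : reject images, stylesheets, and scripts; only the HTML survives
WGET_OPTS="--mirror --convert-links --html-extension --no-parent -e robots=off -R gif,jpg,jpeg,png,css,js --wait=0"

# Print the full command instead of running it, since the URL above is a placeholder:
echo wget $WGET_OPTS "$BASE_URL"
```

With MediaWiki's default `index.php?title=...` URLs, `--html-extension` saves each page with a `.html` suffix, so the mirrored files open in a browser on double-click, which is exactly what a non-IT user needs.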
