OK, this depends on you having a Macintosh or a Linux machine... something with a UNIX-like command line. Open a terminal window and paste the following command in:
for i in $(curl -s http://allears.net/menu/menus.htm | egrep "men_|menu_" \
    | sed "s/^.*a href=\"/http:\/\/allears.net\/menu\//" \
    | sed "s/.htm.*$/.htm/" | sed "s/menu\/\/menu/menu/"); do
  curl -s $i | egrep "<p>|h1|h3|title" \
    | egrep -v "Found|error|htm|Banner|Subscribe|Archive|Plan|At-a-Glance|Menus\!|Dinner Shows" \
    | sed "s/<img.*>//" | sed "s/<a.*>//g" | sed "s/title/h1/g" >> trim_menus.html
done
This will download the list of menus, then step through it, downloading each menu and appending it to a single HTML document called trim_menus.html, after stripping out extraneous markup and attempting to keep all of the menu items. I am sure it burps on a few of the menus and drops some information, but it should get most of it.
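If you want to sanity-check things before kicking off the full download, you can run just the first stage of the pipeline on its own to see the URL list it builds (same commands as above; the head at the end just limits the output to the first few lines). If the site's markup has changed since I wrote this, the URLs that come out may need tweaking:

curl -s http://allears.net/menu/menus.htm | egrep "men_|menu_" \
    | sed "s/^.*a href=\"/http:\/\/allears.net\/menu\//" \
    | sed "s/.htm.*$/.htm/" | sed "s/menu\/\/menu/menu/" | head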
Again, if you have a Mac, you can also do (on the command line in the Terminal):
textutil -convert docx trim_menus.html
and it will spit out a Word document called trim_menus.docx, which you can then open with Word. The original HTML file should open in most web browsers. The Word document, in my tests, ran to 688 pages or so. It also keeps the original copyright information in, because I felt it should not be removed.
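If you would rather have plain text than a Word document, textutil can convert to that as well (txt is one of its standard output formats, though I have not tried it on this particular file):

textutil -convert txt trim_menus.html

which should leave a trim_menus.txt next to the original file.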