Probably not what you're looking for but I use a Chrome Extension to monitor pages or parts of pages. I actually use it to get notifications if something I want at Ikea is in stock (because their online ordering system is stupid / useless).
Most of the software I've seen and used is focused on the page level, and there is a reason for that.
For an entire website you would be looking at a system that spiders the site, takes a snapshot of each page, and then compares the snapshots over time. That would be quite 'intensive' for a large site, and it would need to re-spider every so often at a low enough rate that it wasn't a nuisance that would get blocked.
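To give a sense of the "snapshot and compare" part, here's a minimal sketch (my own illustration, not any particular tool): fingerprint each page's content with a hash, then compare a later fetch against the stored fingerprint. The fetching, scheduling, and robots.txt politeness are left out.

```python
import hashlib

def snapshot(html: str) -> str:
    """Return a fingerprint of the page content."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def has_changed(old_fingerprint: str, html: str) -> bool:
    """Compare freshly fetched content against the stored fingerprint."""
    return snapshot(html) != old_fingerprint

# In a real monitor you'd fetch each URL (honoring robots.txt and a
# crawl delay), persist the fingerprints, and re-spider on a schedule.
first = snapshot("<html><body>In stock: no</body></html>")
print(has_changed(first, "<html><body>In stock: yes</body></html>"))  # True
print(has_changed(first, "<html><body>In stock: no</body></html>"))   # False
```

In practice you'd also want to strip out timestamps, ads, and other boilerplate before hashing, otherwise every page looks "changed" on every crawl.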
Rather than doing that yourself, it may be better to hook into an existing open-source web index. Obviously that only works if they spider JW.org:
I can't remember the name of it but there is one - the idea is that there is a single re-usable index (like "Google Lite") that people can use instead of everyone individually spidering and crawling websites. I'll let you know if I remember / figure out what it is.