I am currently studying various accessibility technologies because I want to create a simple (but, I hope, useful) screen reader as a university study project. So far I know of three approaches that most screen readers use to access elements on the screen and query their states (e.g., whether a button is highlighted, pressed, inactive, and so on):
1. Display driver interception (DDI). I have not tried it yet. It seems tricky and somewhat dangerous: it can mess up the whole OS if not used carefully. It was a good option on Win9x, where there were no better alternatives, but Windows now offers other technologies. Does this approach still have any compelling advantages?
2. Windows API hooks. Less dangerous than DDI, but still tricky. Pros: the technique is familiar to me, and there are plenty of articles and examples on the Web. (A minimal sketch of the event-hook variant follows this list.)
3. Microsoft Active Accessibility, MSAA (since superseded by UI Automation in .NET Framework 3.0). Cons: less documented than hooking. Pros: it is the way Microsoft recommends :-). I have already played with it a bit (see the second sketch below), and it seems easier to implement than option 2. But does it offer the same power that hooking has?
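For option 2, here is a minimal sketch of what I understand the event-hook variant to look like. (Strictly speaking, `SetWinEventHook` belongs to the Active Accessibility API, but unlike a classic `SetWindowsHookEx` global hook it needs no DLL injection when used out-of-context. `EVENT_OBJECT_FOCUS` is just an example; a real screen reader would subscribe to a wider event range.)

```cpp
#include <windows.h>
#include <iostream>
#pragma comment(lib, "user32.lib")

// Called by the system whenever a subscribed event fires anywhere on the desktop.
void CALLBACK WinEventProc(HWINEVENTHOOK hHook, DWORD event, HWND hwnd,
                           LONG idObject, LONG idChild,
                           DWORD idEventThread, DWORD dwmsEventTime)
{
    if (event == EVENT_OBJECT_FOCUS)
        std::cout << "Focus changed, hwnd=" << hwnd << "\n";
}

int main()
{
    // WINEVENT_OUTOFCONTEXT: the callback runs in our own process,
    // so no DLL has to be injected into other applications.
    HWINEVENTHOOK hook = SetWinEventHook(
        EVENT_OBJECT_FOCUS, EVENT_OBJECT_FOCUS,
        nullptr, WinEventProc, 0, 0,
        WINEVENT_OUTOFCONTEXT | WINEVENT_SKIPOWNPROCESS);

    // A message loop is required for out-of-context events to be delivered.
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    UnhookWinEvent(hook);
    return 0;
}
```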
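And for option 3, this is roughly what I played with in MSAA: given a window handle, get its `IAccessible` and read the name and the state bit mask. (Using `GetForegroundWindow` here is just a stand-in for whichever window you actually care about.)

```cpp
#include <windows.h>
#include <oleacc.h>
#include <cstdio>
#pragma comment(lib, "oleacc.lib")

void ReportWindow(HWND hwnd)
{
    IAccessible* acc = nullptr;
    // OBJID_CLIENT asks for the accessible object of the client area.
    if (FAILED(AccessibleObjectFromWindow(hwnd, OBJID_CLIENT, IID_IAccessible,
                                          reinterpret_cast<void**>(&acc))))
        return;

    VARIANT self;
    self.vt = VT_I4;
    self.lVal = CHILDID_SELF;   // query the object itself, not one of its children

    BSTR name = nullptr;
    if (SUCCEEDED(acc->get_accName(self, &name)) && name) {
        wprintf(L"Name: %s\n", name);
        SysFreeString(name);
    }

    VARIANT state;
    VariantInit(&state);
    // accState is a bit mask: STATE_SYSTEM_PRESSED, STATE_SYSTEM_UNAVAILABLE, etc.
    if (SUCCEEDED(acc->get_accState(self, &state)) && state.vt == VT_I4) {
        if (state.lVal & STATE_SYSTEM_PRESSED)
            wprintf(L"State: pressed\n");
        if (state.lVal & STATE_SYSTEM_UNAVAILABLE)
            wprintf(L"State: inactive\n");
    }
    VariantClear(&state);
    acc->Release();
}

int main()
{
    CoInitialize(nullptr);      // MSAA is COM-based
    ReportWindow(GetForegroundWindow());
    CoUninitialize();
    return 0;
}
```

In practice the two pieces seem to combine: the WinEvent hook tells you *when* something happened, and `AccessibleObjectFromEvent` (or `AccessibleObjectFromWindow`, as above) tells you *what* it happened to.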
So my question to everyone with experience in these (or other) ways of accessing everything on the screen: can you share your experience and tell me which approach is the most effective in terms of coding-effort-to-usefulness ratio? Or is there anything I have missed entirely?
Thanks.