The new Etch installer does all this for you and more. I leave this page up for people who want to transition an existing system to RAID without reinstalling, though it might be easier to back up and reinstall. It also provides clues on how to recover from many situations.
Late breaking news (4/05)
The newer Debian installers have RAID (and also LVM) working, so this guide should not be needed unless you are transitioning from a standard setup to RAID without reinstalling. It might also be helpful if you are debugging a broken RAID.
Now updated for the 2.6 kernel
I've read that there will soon be an installer that will do RAID installs and perhaps even support SATA, but today it is a manual process. My install on an Intel D865PERL motherboard got 'interesting'. The latest Debian testing installer (beta 4) does support SATA, as does a version of Debian/Libranet, but moving on to RAID is still a manual task.
The basic idea is to install everything on one drive, then partition the second drive with exactly the same sizes and install mdadm (the new and improved replacement for raidtools). I'm assuming we are using clean drives. If you re-use previously used disks, their superblocks must be zeroed (the --zero-superblock option to mdadm) before adding the partitions to the array.
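A rough sketch of these two preparatory steps, assuming the installed disk is /dev/sda and the clean second disk is /dev/sdb (adjust device and partition names to your system; these commands are destructive):

```shell
# Copy the partition table from disk 1 to disk 2 so the sizes match exactly.
# sfdisk -d dumps the table in a format sfdisk can read back in.
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Only needed if /dev/sdb was previously part of an md array:
# wipe any stale RAID superblocks before the partitions are reused.
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
```

If mdadm complains that a partition is already part of an array, the stale superblock is almost certainly the cause.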
This guide was produced using a Tyan S2875 motherboard.
The whole mess is harder than it should be and I hope this page becomes the basis for someone's automating script (hint hint).
Overview of steps
Install Debian on the first drive (/dev/sda)
Create a degraded RAID1 array on disk 2 (/dev/sdb)
Update the initrd
Copy the Debian installation from disk 1 to disk 2 (sda to sdb)
Fix fstab on /dev/md2
Add disk 1 to the degraded array to create the final RAID array
Update the initrd again
Produce /etc/mdadm/mdadm.conf
Set up the monitoring daemon
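The steps above can be sketched as the following command sequence. The device names, the three-partition layout (md0/md1/md2, with md2 as the root filesystem), and the ext3 filesystem are assumptions for illustration; adapt them to your own partitioning before running anything:

```shell
# 1. Create degraded RAID1 arrays on disk 2; the keyword "missing"
#    reserves the slot that disk 1 will fill later.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb3

# 2. Make a filesystem on the new root array, mount it, and copy
#    the installation over, staying on one filesystem (-x).
mkfs.ext3 /dev/md2
mount /dev/md2 /mnt
cp -ax / /mnt

# 3. Edit /mnt/etc/fstab so / mounts from /dev/md2, rebuild the initrd
#    with RAID support (mkinitrd or update-initramfs, depending on your
#    tools), update the boot loader, and reboot onto the arrays.

# 4. Once running from the arrays, add disk 1's partitions to
#    complete the mirrors (repeat for each array/partition pair).
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md2 /dev/sda3

# 5. Record the arrays and watch the rebuild.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
cat /proc/mdstat
```

On Debian the monitoring daemon is then enabled through the mdadm package's init configuration, so a failing disk gets reported by mail rather than discovered by accident.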