You have good reason to be concerned, as database shrinks in SQL Server often "suck". Paul Randal, who led the Storage Engine team for SQL Server 2005, has stated that ShrinkDB is written very poorly. It frees space by taking pages from the very end of the file and moving them to the first free space at the beginning, repeating until there is contiguous free space at the end of the data file. Only at that point can SQL Server release the space and give it back to the OS. You are effectively reversing your database file, so you will usually see massive index fragmentation afterward. You can read his views in this blog post or in this MCM Internals video.
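If you want to see this effect for yourself, a minimal sketch is to check logical fragmentation with `sys.dm_db_index_physical_stats` before and after a shrink (run in the database in question):

```sql
-- Check index fragmentation in the current database (LIMITED mode is cheap).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0   -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

Run it once, shrink, then run it again; the jump in `avg_fragmentation_in_percent` shows you exactly what the shrink did.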
As with everything, you really must test this in your environment first. A better approach is to move the data to a different filegroup: do an online rebuild of the clustered index into the new filegroup, then rebuild the nonclustered indexes there as well (see the sketch below). You can then drop the old filegroup, release its space, and end up with almost no fragmentation. Note this will take roughly 120% extra space while it's working, and the problem with that is you need even more free space, which it looks like you might not have. Online index rebuild is an Enterprise Edition feature.
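A hedged sketch of the filegroup move; the database, file, table, index, and filegroup names here are all assumptions, and `ONLINE = ON` requires Enterprise Edition:

```sql
-- 1. Create the new filegroup and give it a file.
ALTER DATABASE MyDb ADD FILEGROUP FG_New;
ALTER DATABASE MyDb ADD FILE
    (NAME = N'MyDb_FGNew', FILENAME = N'D:\Data\MyDb_FGNew.ndf', SIZE = 10GB)
TO FILEGROUP FG_New;

-- 2. Rebuild the clustered index onto the new filegroup;
--    the table's data moves with it.
CREATE UNIQUE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (Id)
    WITH (DROP_EXISTING = ON, ONLINE = ON)
    ON FG_New;
```

Repeat for each table (and rebuild the nonclustered indexes the same way), then drop the old filegroup's files once they're empty.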
If free space is at that much of a premium, then you might have to bite the bullet and slowly shrink the DB a small chunk at a time to avoid long-running operations. Note that your data will be heavily fragmented afterward and you will want to reindex everything again. Also note that reindexing will balloon your used space back up a bit, so you'll be back to needing additional free space. See Brent Ozar's advice here.
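A minimal sketch of the chunked shrink, built with dynamic SQL so each step's target size can come from a variable; the file name and sizes are assumptions:

```sql
DECLARE @target_mb int = 100000;   -- current data file size in MB (assumption)
DECLARE @stop_mb   int = 80000;    -- size you want to end at (assumption)
DECLARE @step_mb   int = 1000;     -- shrink ~1 GB per pass to keep each run short
DECLARE @sql nvarchar(200);

WHILE @target_mb - @step_mb >= @stop_mb
BEGIN
    SET @target_mb = @target_mb - @step_mb;
    SET @sql = N'DBCC SHRINKFILE (N''MyDb_Data'', '
             + CAST(@target_mb AS nvarchar(10)) + N');';
    EXEC sys.sp_executesql @sql;
END
```

Each pass only moves a little data, so you can stop between passes during business hours without losing much progress.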
As far as how much free space is good for you, it's a matter of how much fragmentation and how many file-growth events you can afford. With instant file initialization (IFI) enabled, data file growth is nearly instant, but you still get physical fragmentation on disk. A good rule of thumb is to preallocate as much space as you think you'll need, or to monitor the growth and expand the files in sizable chunks periodically if you have to. This keeps physical fragmentation down.
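For the growth-in-chunks approach, a hedged example (database and logical file names are assumptions) is to replace percentage autogrowth with a fixed, sizable increment:

```sql
-- Grow the data file in fixed 1 GB steps instead of a percentage.
ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_Data', FILEGROWTH = 1024MB);
```

With IFI enabled a 1 GB data-file growth is nearly instant, and fewer, larger growths mean fewer physical fragments on disk.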
Log file growth is even more important. Frequent small log growths cause VLF (virtual log file) fragmentation, which makes your restores a lot slower and can affect checkpoints and log truncation. There are real performance risks with a fragmented log. Run DBCC LOGINFO(); in each database; it returns one row per VLF. Per Kimberly Tripp, try to keep the count around 50ish; if you see hundreds, you have fragmentation issues, which means your log file had to grow many times in small increments to support operations.

A good way to see how big your log file should be, per Paul Randal, is to just let it grow for a week, including a full reindex, and see where it settles; you can then add a bit more free space on top just in case. Check DBCC LOGINFO(); again to make sure the log isn't fragmented; if it is, it grew a lot in small steps, so shrink and re-expand the log file using this method.
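To put numbers on that, a rough sketch of the check-then-fix cycle; the database name, log file name, and target size below are assumptions:

```sql
USE MyDb;

-- One row per VLF; hundreds of rows indicate a fragmented log.
DBCC LOGINFO;

-- Shrink the log down, then re-expand it in one large step
-- so it's rebuilt from a few big VLFs instead of many tiny ones.
DBCC SHRINKFILE (N'MyDb_Log', 1);
ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_Log', SIZE = 8000MB);
```

Per Kimberly Tripp, growing the log in chunks of up to about 8 GB keeps the individual VLFs reasonably sized; for a larger log, repeat the `MODIFY FILE` step in several increments rather than one giant jump.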