I would like the 64GB limit on NSF and DB2NSF databases to be greatly increased. This limit has been in place since Release 5.0, and it causes serious problems for large, complex applications. Our web applications store hundreds of GB of medical data (mostly XML and other structured text formats), so we have had to write a lot of extra code to break the data up into smaller chunks while still allowing end users to query it all together.
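To give a sense of the kind of extra code the limit forces on us, here is a minimal sketch (in Python, with hypothetical names like PartitionedStore — this is not Domino API code) of the chunking pattern: hash each document key to pick one of several stores, each standing in for a separate NSF file kept under the 64GB cap, then fan queries out across all of them so the data still looks like one set to the user.

```python
import hashlib

class PartitionedStore:
    """Illustrative chunking layer: each partition stands in for one NSF file."""

    def __init__(self, num_partitions):
        self.partitions = [dict() for _ in range(num_partitions)]

    def _index(self, key):
        # Stable hash so the same key always routes to the same partition.
        digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
        return int(digest, 16) % len(self.partitions)

    def put(self, key, doc):
        self.partitions[self._index(key)][key] = doc

    def get(self, key):
        # Direct lookups only need to touch the one owning partition.
        return self.partitions[self._index(key)].get(key)

    def query(self, predicate):
        # Queries must fan out across every partition and merge the results,
        # which is exactly the complexity a bigger NSF limit would remove.
        return [doc for part in self.partitions
                    for doc in part.values() if predicate(doc)]

store = PartitionedStore(4)
store.put("patient-001", {"format": "xml", "size_mb": 12})
store.put("patient-002", {"format": "xml", "size_mb": 300})
large_docs = store.query(lambda d: d["size_mb"] > 100)
```

All of the routing and fan-out logic here is overhead that exists only to work around the per-database size ceiling.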
According to the IBM developers I've talked to, removing the limit wouldn't be technically difficult; the main cost is the extensive testing needed to ensure reliability. So if this idea gets enough votes, perhaps they will commit to doing that testing.
The new document data compression feature coming in 8.0.1 and DAOS in 8.5 will help marginally, but they don't really solve the problem.