Help! Could someone with good English translate this thread for me? Please, no machine translation.
1:Change "max_statements" to 0 and try... I'm not sure it will work, but that's what worked for me. The difference is that I was using Firebird, and there was already a related issue.
2:That would likely get rid of the statement cache messages, but unless I misunderstand the feature, it would be "fixed" at the cost of totally disabling the prepared statement cache. I'd kinda like to retain the performance value gained from reusing statements. But thanks anyway!
3:I'm surprised that no other theories or options have come out... Is there really no way to deal with the cached statement warnings other than to completely disable prepared statement caching? If so, that reduces the value of c3p0 to connection pool only. I'd appreciate any other insights on the subject... Thanks.
4:Mark,
May the StatementCache be with you.
Regarding your deadlocks, they are interesting. The tasks that are failing to terminate are not the Statement cache tasks, but Connection close() tasks. Nevertheless, past deadlocks have occurred because Statement close() was happening simultaneously with Connection close(), and I'm suspicious that we are seeing more of the same here, since you are clearly churning through a lot of PreparedStatements. (You have a zillion pending statement close tasks, and three Connection close tasks that are failing.)
0.9.0 cleared up the most obvious bad interaction between Statement close() and Connection close(), but it occurs to me that the strategy does not yet guarantee that a PreparedStatement close and a Connection close cannot, by happenstance, occur asynchronously and nearly simultaneously, especially when Statements are being close()ed quite frequently. So, my job is to tighten that up. [Even though in theory it should be foreseen and handled cleanly, many drivers don't like Statements being used simultaneously with Connection operations or after Connection close().]
Your job is this: 1) take moonlight's suggestion for just long enough to persuade yourself that the statement cache is implicated in the apparent deadlock, and write back to let me know the problem goes away when statement caching is turned off; 2) try making max_statements much, much larger, or set it to 0 and set c3p0.maxStatementsPerConnection in c3p0.properties to as many distinct PreparedStatements as your application ever uses. This will both improve the performance of your application (statement caching doesn't much help if you are always churning through the statements), and make the deadlock you are seeing, presuming it is an artifact of statement close simultaneous with connection close, much less likely. The only likely downside of a large max_statements is the potential memory footprint (see how it works for you), but there may be many upsides. Do give it a shot.
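For reference, Steve's two suggestions above can be sketched as c3p0.properties entries. (The property names maxStatements and maxStatementsPerConnection are from c3p0's documentation; the numeric values here are illustrative only, not recommendations.)

```properties
# Option 1: disable statement caching entirely, just long enough to
# confirm the cache is implicated in the apparent deadlock.
c3p0.maxStatements=0
c3p0.maxStatementsPerConnection=0

# Option 2: keep caching, but make the per-connection cache large
# enough that statements are rarely evicted (and hence rarely
# close()ed asynchronously). Set it to at least the number of
# distinct PreparedStatements your application ever uses.
#c3p0.maxStatements=0
#c3p0.maxStatementsPerConnection=50
```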
Good luck!
Steve (c3p0 guy)
5:That will not likely be workable for me, as I am not running just a single program (i.e., Tomcat) that can afford a bunch of new overhead. My environment is a bunch of separate daemons and jobs that connect to the db to do separate work, and I can't allow all N of these processes to gain a bunch of new overhead.
I guess I'll leave caching off for now (which isn't currently a performance issue) and hope that there will be a concrete fix for it before I absolutely need to turn caching back on.
Thanks!
Mark
6:I am seeing similar issues with MySQL 4.1 using JDBC driver version 3.1.11. We did find a bug in the MySQL JDBC driver source code where statement close and connection close could run into a deadlock. Is there any kill mechanism in c3p0 for a hanging close() call? Do we interrupt the thread or something like that?
Thanks, Steve, for your notes on making statement close() less likely to be needed. I haven't tried it, but I'm pretty sure I wouldn't be able to use this approach, because I'm running a highly multi-threaded text indexing application that is somewhat sensitive to memory usage.
7:ramvem,
c3p0-0.9.1 will have a "maxAdministrativeTaskTime" parameter, which will call interrupt() on c3p0's helper threads when an administrative task [Connection acquisition or test, Connection or cached Statement destruction] is taking too long. "APPARENT DEADLOCK"s will also include stack traces of the deadlocking tasks if on a 1.5+ VM (which offers the Thread.getStackTrace() method). Though these APPARENT DEADLOCKS seem to be a lot less common in 0.9.0.x than they used to be, they still do occur for some users, and I'm hoping to get a much better handle on why...
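The parameter Steve describes can be set like this in c3p0.properties. (The property name maxAdministrativeTaskTime is as he states for 0.9.1; the timeout value below is just an example.)

```properties
# Interrupt a helper thread if an administrative task (Connection
# acquisition or test, Connection or cached Statement destruction)
# runs longer than this many seconds. 0, the default, means never.
c3p0.maxAdministrativeTaskTime=30
```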
c3p0-0.9.1-pre7 will be available within a few days with this functionality (and quite a few other changes). If you have the luxury of being able to test a prerelease, please do give it a good hard go.
smiles,
Steve
8:I'm actually running into the same issue using hibernate-3.2.1.ga, c3p0-0.9.1-pre6, mysql-connector-java-5.0.4 on mysql 5.0.27. The caveat is that it happens when I run it under ServiceMix. The issue thread can be found here: http://www.nabble.com/issues-with-deadl ... 12049.html
I wanted to try upgrading to c3p0-0.9.1-pre7 but the c3p0 site is currently down. If you have any suggestions for me to troubleshoot, please let me know. Thanks in advance.
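For anyone trying to reproduce the Hibernate + c3p0 stack described above: Hibernate 3.2 switches to its C3P0ConnectionProvider automatically when any hibernate.c3p0.* property is set and the c3p0 jar is on the classpath. A minimal hibernate.properties sketch (all values illustrative, not taken from this thread):

```properties
hibernate.connection.driver_class=com.mysql.jdbc.Driver
hibernate.connection.url=jdbc:mysql://localhost/test
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
hibernate.c3p0.timeout=300
# Statement caching; 0 disables the cache, as discussed above.
hibernate.c3p0.max_statements=0
```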
-los
9:hi,
please try c3p0-0.9.1-pre11. i think most of these APPARENT DEADLOCK issues are now resolved, though of course you can prove me wrong...
if you have trouble reaching c3p0's website on mchange.com (we're having some network issues), please go to sourceforge.net and search for c3p0. sourceforge hosts the actual releases.
good luck!
steve
10:Hi,
I actually switched to pre11 and it's still giving me "too many connections" issues. Can I send you my stack trace directly via email? Thanks.
-los
11:I've got it working. It's actually an issue with Hibernate 3.2.1.ga. Without changing any of my code, when I switched back to 3.2.0.ga (still on c3p0 pre11), everything worked out fine. So it's either a bug in Hibernate, or c3p0 needs to be updated to reflect the 3.2.1 changes (I think it's the former, though).
-los
12:I've been on pre11 and 3.2.1 for a little while and have been having these problems, usually when reloading web apps from the Tomcat manager. I just rolled back to 3.2.0 and haven't had any problems yet.