How is Sybase's performance for large-volume inserts, and how can I improve it?
My current task is to import CSV files generated by a back-end process into a Sybase database. Because the target table has to be determined for each row individually, I can only read the rows one at a time before inserting them. My approach is to do that routing myself and assemble a multi-row batch insert statement, for example:

```sql
INSERT INTO WG_float_202007
    (ObjectId, AttributeId, Seconds, Nanoseconds, Type, ErrorValue, Value)
SELECT 527433753, 630, 1592477964,  74700000, 'r', 0, 953.000000
UNION ALL SELECT 527433753, 630, 1592477924, 984800000, 'r', 0, 940.000000
UNION ALL SELECT 527433753, 630, 1592477864, 861200000, 'r', 0, 920.000000
UNION ALL SELECT 527433753, 630, 1592477937,  17500000, 'r', 0, 944.000000
UNION ALL SELECT 527433753, 630, 1592477894, 922500000, 'r', 0, 930.000000
UNION ALL SELECT 527433753, 630, 1592477855, 842200000, 'r', 0, 917.000000
```

This is the only batch-insert technique I have been able to find, and it is not fast: at most about 2,000 rows per second. Worse, if a single statement contains too many rows, it fails with the error "There is not enough procedure cache to run this procedure, trigger, or SQL batch. Retry later, or ask your SA to reconfigure ASE with more procedure cache."

I am using jConnect 4 against Sybase ASE 15.7, in a Java project. For continuously inserting several million rows, how should I approach this? How should the SQL be written, and what can be optimized?
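For context, here is a minimal sketch of how I am assembling and flushing these statements from Java. It is simplified: the connection URL and credentials are placeholders, the CSV parsing and table-routing logic are stubbed out with two hard-coded rows, and the chunk size of 100 is an arbitrary value I picked to stay under the procedure cache error.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of my current approach: rows are grouped per target
// table and flushed as one INSERT ... SELECT ... UNION ALL ... statement.
public class UnionAllBatchInsert {

    // Arbitrary chunk size; larger values trip the procedure cache error.
    private static final int CHUNK_SIZE = 100;

    public static void main(String[] args) throws SQLException {
        // jConnect URL; host, port, database, and credentials are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:localhost:5000/mydb", "user", "password")) {

            List<String> rows = new ArrayList<>();
            // In the real code each CSV line is parsed and routed to a table
            // such as WG_float_202007; here two hard-coded rows stand in.
            rows.add("527433753, 630, 1592477964, 74700000, 'r', 0, 953.000000");
            rows.add("527433753, 630, 1592477924, 984800000, 'r', 0, 940.000000");

            flush(conn, "WG_float_202007", rows);
        }
    }

    // Builds one multi-row insert per chunk and executes it.
    static void flush(Connection conn, String table, List<String> rows)
            throws SQLException {
        for (int start = 0; start < rows.size(); start += CHUNK_SIZE) {
            int end = Math.min(start + CHUNK_SIZE, rows.size());
            StringBuilder sql = new StringBuilder("INSERT INTO ").append(table)
                    .append(" (ObjectId, AttributeId, Seconds, Nanoseconds,")
                    .append(" Type, ErrorValue, Value) ");
            for (int i = start; i < end; i++) {
                if (i > start) {
                    sql.append(" UNION ALL ");
                }
                sql.append("SELECT ").append(rows.get(i));
            }
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(sql.toString());
            }
        }
    }
}
```

Even with chunking like this I top out around 2,000 rows per second, so I suspect the approach itself is the bottleneck.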