Hi
I'm in the middle of a migration project from IQ 12.7 to IQ 16 SP4.5. Currently, 96GB is allocated for IQ on the new RHEL server, and I have configured Main Cache, Temp Cache and Large Memory to 32GB each. However, I have problems loading relatively wide tables. When loads into two wide tables run concurrently, one of the LOADs fails and I receive the following messages in the IQ message log:
I. 11/04 12:23:29. 0000000070 Exception Thrown from slib/s_llhtable.cxx:187, Err# 13, tid 2 origtid 2
I. 11/04 12:23:29. 0000000070 O/S Err#: 0, ErrID: 517 (hos_memexception); SQLCode: -1013130, SQLState: 'QBC75', Severity: 14
I. 11/04 12:23:29. 0000000070 [22160]: All IQ large memory has been used, allocation canceled [size: 33554495]
-- (slib/s_llhtable.cxx 187)
The size "33554495" is always the same.
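For reference, the relevant cache settings in my server configuration file currently look roughly like this (values in MB; the other startup options are omitted):

    -iqmc 32768
    -iqtc 32768
    -iqlm 32768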
When I increase -iqlm to 48GB, the problem doesn't occur. Monitoring via sp_iqsysmon (-section lma) reveals that when run separately, the first load allocates about 33GB of "Large Memory Inflexible" and the second load allocates about 10GB.
When run concurrently, the consumption of "Large Memory Inflexible" is about 42GB. The first table has 766 columns and the second one 572, mostly NBit FP indexes. The number of rows inserted in each load is not big at all, fewer than a million. There are no conversions to flat FP during the loads. I use the default values for FP_NBIT_Autosize_Limit and FP_NBIT_Lookup_MB.
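For completeness, the "Large Memory Inflexible" numbers above were taken from sp_iqsysmon in batch mode, roughly like this (the five-minute sampling interval below is only an example):

    sp_iqsysmon '00:05:00', 'lma'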
Questions:
1. Do such massive memory requirements make sense at all? Does this sound like normal behavior or a bug?
2. I read the section about the large memory cache in the IQ Sizing Guide. Still, it is not clear to me how to calculate the large memory requirements for loads of NBit indexes. Specifically, is there a way to calculate them precisely based on the output of sp_iqindexmetadata (TokenCount, CountSize, DictSize)? See the first example after this list.
3. In IQ 15.*, FP(3) indexes proved to be problematic. In many cases FP(3) brought more harm than good; in some cases I had to ban such indexes entirely to avoid a significant drop in performance. Now we have NBit indexes, which can potentially grow even beyond the limits of FP(3). Specifically, I have tens of NBit indexes with >16 bits on the first table in my test case. Should this be considered good practice? Maybe it is worth converting such indexes to Flat FP, given that I prefer better performance over storage savings? See the second example after this list.
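First example (for question 2): this is the kind of call I use to look at the FP metadata; the index, table and owner names below are placeholders, with the index name standing in for the system-generated FP index name of one of the wide columns:

    sp_iqindexmetadata 'ASIQ_IDX_T777_C5_FP', 'my_wide_table', 'DBA'

For NBit FP indexes, the output includes the TokenCount, CountSize and DictSize values I referred to above.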
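Second example (for question 3): if I understand correctly, an existing NBit FP can be rebuilt as Flat FP with sp_iqrebuildindex by giving a count of 0 for the column; please correct me if this is not the right way to do the conversion (table and column names are placeholders):

    sp_iqrebuildindex 'my_wide_table', 'column my_column 0'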
Thanks in advance
Leonid Gvirtz