I have some large flat files. One supplies an id-style field that gets used for a filter expression. The second is a complete set of data. An mvimport chugs through the second file without an application timeout. But when I read the 1st file and apply the filter on the second file, I run into an application timeout sometime after about 830 lines. This doesn't make any sense and I can't figure out where the resources are being exhausted. This script is not a module.
I have also loaded the first file into a single-column array so that import isn't embedded, but the second import still needs to sit inside an mvwhile or foreach loop. Either technique results in an application timeout.
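For scale, a nested import re-reads the entire second file once per outer record, so the work grows multiplicatively rather than additively. A quick back-of-envelope in Python, using the record counts from the dataset described below:

```python
# Rough cost of the nested-import approach:
# each of the ~9,400 outer records forces a full pass
# over the ~65,000-record second file.
outer_records = 9_400   # first file (filter ids)
inner_records = 65_000  # second file (full data set)

total_reads = outer_records * inner_records
print(f"{total_reads:,} inner-loop record reads")   # 611,000,000

# Even dying at ~830 outer lines already means:
print(f"{830 * inner_records:,} reads before the timeout")  # 53,950,000
```

So the timeout at ~830 lines is consistent with the inner import being repeated in full on every outer cycle.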
I guess I am looking for a way to reset the timeout, if that is possible? In my pseudo code I guess it would need to happen before the outer loop cycles to its next import?
There is another idea (just thought of): keep one import loop, use the single-column array, and use a counter++ as the index (index[counter++]) to grab the value at that position. Will that work? I'm skeptical atm.
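If it helps to see the counter idea concretely, here is a Python sketch (the field values and lists are placeholders, not the real data). The caveat is that index[counter++] only lines up when both files share exactly the same sort order; a plain membership lookup avoids that assumption:

```python
# Placeholder data standing in for the two flat files.
filter_ids = ["A1", "B2", "C3"]                    # first file, as a single-column array
records = [("B2", "x"), ("A1", "y"), ("D4", "z")]  # second file: (id, payload)

# Counter approach: compare each record against filter_ids[counter],
# advancing the counter on a match. Only valid if both files are
# sorted identically -- here they are not, so B2 is missed.
counter = 0
matched_in_order = []
for rec_id, payload in records:
    if counter < len(filter_ids) and rec_id == filter_ids[counter]:
        matched_in_order.append(payload)
        counter += 1
print(matched_in_order)  # ['y']

# Order-independent alternative: one pass with a membership lookup.
wanted = set(filter_ids)
matched = [payload for rec_id, payload in records if rec_id in wanted]
print(matched)  # ['x', 'y']
```

So the counter trick can work, but only under a shared-sort-order assumption that is easy to break; the lookup version has no such requirement.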
FYI: Some real numbers from the current dataset: 1st file has about 9400 records, the second file has about 65000 records and a couple dozen columns.
Thanks,
Scott
Code:
MvImport 'thefirstfile'                  ; field is filterrecord
    MvImport 'thesecondfile'
        if filterrecord EQ secondfilefield
            ; cycle through loaded lines that match the filter expression
        /if
    /MvImport
/MvImport
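Assuming the goal of the nested imports above is a simple id-based join, the usual restructuring is to read the filter file once into a lookup set and then make a single pass over the data file, so neither import is embedded in the other. A Python sketch of that shape (the file paths, tab delimiter, and id-in-first-column layout are assumptions):

```python
import csv

def filter_records(filter_path, data_path, out_path, delimiter="\t"):
    # Pass 1: load the ~9,400 filter ids into a set
    # (one complete read of the first file).
    with open(filter_path, newline="") as f:
        wanted = {row[0] for row in csv.reader(f, delimiter=delimiter) if row}

    # Pass 2: one complete read of the ~65,000-record file,
    # writing out only the rows whose id is in the set.
    with open(data_path, newline="") as f, open(out_path, "w", newline="") as out:
        writer = csv.writer(out, delimiter=delimiter)
        for row in csv.reader(f, delimiter=delimiter):
            if row and row[0] in wanted:  # assumes the id is the first column
                writer.writerow(row)
```

That is roughly 9,400 + 65,000 record reads instead of 9,400 × 65,000, which is why the single-pass shape tends to stay well inside an application timeout.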