I need a pool structure that can reuse/recycle memory after use, i.e., one that does NOT do any allocation or deallocation on the fly (although the memory still needs to be allocated once when the program starts).
Boost.Pool does not support such a mechanism; is there any alternative?
...
I'm doing some heavy processing (building inverted indices) using ints/longs in Java.
I've determined that (un)boxing in the standard Java collections maps takes a big portion of the total processing time (compared to a similar implementation using arrays, which I can't use due to memory constraints).
I'm looking for a fast 3rd-part...
I have a number of search functions (stored procedures) which need to return results with exactly the same columns.
This is the approach that was taken:
Each stored procedure had the general structure:
CREATE TABLE #searchTmp (CustomerID uniqueIdentifier)
INSERT INTO #searchTmp
SELECT C.CustomerID FROM /**** do actual search h...
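Filled in only for illustration, a sketch of that general structure (the Customers table, its columns, @SearchTerm, and the LIKE predicate are all placeholders, since the actual search body is elided above):
CREATE PROCEDURE dbo.Customer_SearchByName   -- hypothetical procedure name
    @SearchTerm nvarchar(100)
AS
BEGIN
    CREATE TABLE #searchTmp (CustomerID uniqueIdentifier)

    INSERT INTO #searchTmp (CustomerID)
    SELECT C.CustomerID
    FROM Customers C
    WHERE C.LastName LIKE @SearchTerm + '%'   -- stands in for the elided "actual search"

    -- every search procedure then returns exactly the same columns by joining back
    SELECT C.CustomerID, C.FirstName, C.LastName, C.Email
    FROM Customers C
    JOIN #searchTmp T ON T.CustomerID = C.CustomerID
END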
I am working on optimizing one of our SQL jobs.
Here I have a few places where we have used the <> operator. The same query can be rewritten using the NOT EXISTS operator. I am just wondering which is the better way of doing it.
Sample Query
IF (@Email <> (SELECT Email FROM Members WHERE MemberId = @MemberId))
--Do Something.
--Same thing can be wr...
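For reference, a minimal sketch of the NOT EXISTS form being asked about, assuming the same Members table and the @Email/@MemberId variables from the sample:
IF NOT EXISTS (SELECT 1 FROM Members WHERE MemberId = @MemberId AND Email = @Email)
BEGIN
    PRINT 'Do Something'   -- placeholder for the real work
END
Note that the two forms are not strictly interchangeable: the <> version does nothing when the member row is missing or the stored Email is NULL, while the NOT EXISTS version fires in both of those cases, so the better choice depends on the intended behavior as much as on performance.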
I have a query as below:
SELECT * FROM Members (NOLOCK)
WHERE Phone = dbo.FormatPhone(@Phone)
Now, here I understand that the formatting has to be applied to the variable or to the column. But should I apply it to the variable, assign the result to some other local variable, and then use that (as below)?
SET @SomeVar = dbo.FormatPhone(@Phone)
SELECT *
FROM Me...
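For illustration, a sketch of the variable-first variant in full (the @FormattedPhone name and its varchar(20) type are assumptions; Members, Phone, @Phone, and dbo.FormatPhone come from the query above):
DECLARE @FormattedPhone varchar(20)
SET @FormattedPhone = dbo.FormatPhone(@Phone)

SELECT *
FROM Members (NOLOCK)
WHERE Phone = @FormattedPhone
Assigning to a local variable first ensures the scalar UDF runs only once; in both forms the function is applied to the variable rather than to the Phone column, which is what keeps the comparison index-friendly.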
I have a long-running job. The records to be processed are in a table with around 100K records.
Now, during the whole job, whenever this table is queried, the query runs against those 100K records.
After processing, the status of every record is updated in the same table.
I want to know if it would be better to add another table where I can update...
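For illustration, one sketch of the separate-status-table idea (the table, column names, and status codes are all assumptions): keep the 100K source rows read-only and track progress in a narrow side table, so the repeated status updates never touch the table being queried.
CREATE TABLE dbo.RecordStatus (
    RecordID int NOT NULL PRIMARY KEY,   -- key of the main work table (assumed to be an int)
    Status tinyint NOT NULL DEFAULT 0,   -- 0 = pending, 1 = done, 2 = failed (assumed codes)
    ProcessedAt datetime NULL
)

DECLARE @RecordID int
SET @RecordID = 42   -- example key; in the job this comes from the processing loop

-- after processing a record, only the narrow side table is touched
UPDATE dbo.RecordStatus
SET Status = 1, ProcessedAt = GETDATE()
WHERE RecordID = @RecordID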
I have a job with around 100K records to process. I have got many suggestions to split this job into chunks and then process them.
What are the benefits of processing smaller chunks of data compared to processing all 100K records at once?
What is the standard way of doing it? E.g., picking 10K records into a temp table and processing one batch at a time?
...
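A common batching sketch for this kind of job, assuming a work table dbo.PendingRecords with an int RecordID key and a Processed flag (all of these names are assumptions):
DECLARE @BatchSize int
SET @BatchSize = 10000

WHILE 1 = 1
BEGIN
    ;WITH batch AS (
        SELECT TOP (@BatchSize) RecordID, Processed
        FROM dbo.PendingRecords
        WHERE Processed = 0
        ORDER BY RecordID
    )
    UPDATE batch
    SET Processed = 1          -- the real per-record work would go alongside this

    IF @@ROWCOUNT = 0 BREAK    -- nothing left to process
END
Smaller batches keep each transaction (and the locks and log space it holds) short, which is usually the main benefit over one 100K-row pass.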
We have a monitoring application built on SWT and running on Linux. We have a few buttons and a dynamic part that changes as we click on these buttons. The problem is that if someone clicks too rapidly, the CPU can reach 100% and the application hangs forever. We observed these rapid CPU spikes only on Ubuntu Linux, whereas on Windows it runs without on i...
As a learning experience I recently tried implementing Quicksort with 3 way partitioning in C#.
Apart from needing to add an extra range check on the left/right variables before the recursive call, it appears to work quite well.
I knew beforehand that the framework provides a built-in Quicksort implementation in List<>.Sort (via Array....
Which is more efficient if there are numerous checks?
bool exists = File.Exists(file);
or
bool exists = check db list of existing files;
...
We have a (large) SELECT query that can take ~30 seconds to run. I am told that when placed in a view, it takes less than 5 seconds to run.
My assumption is that SQL Server caches query plans for queries that don't change, so why the massive improvement in performance here?
Just to be clear, this really is just a case of taking somet...
Hi,
I'm writing an application in Python with PostgreSQL 8.3 which runs on several machines on a local network.
All machines
1) fetch a huge amount of data from the database server (let's say the database gets 100 different queries from a machine within 2 seconds), and there are about 10 or 11 machines doing that.
2) After processing ...
Hi there...
I tried to google this and came up a little short, so maybe someone here can shed some light on the topic.
For URL rewriting purposes in ASP.NET, I would like to declare all images and other resources in my application with the runat="server" attribute to take advantage of the "~/images" server path syntax. Debugging on ...
I thought it would be interesting to look at threads and queues, so I've written 2 scripts: one will break a file up and encrypt each chunk in a thread, the other will do it serially. I'm still very new to Python and don't really know why the threading script takes so much longer.
Threaded Script:
#!/usr/bin/env python
from Crypto.Cipher ...
Hi,
Are there any noticeable outcomes, in terms of performance or other aspects, from following semantic HTML?
Thanks
...
We have a number of items coming in from a web service; each item contains an unknown number of properties. We are storing them in a database with the following schema.
Items
- ItemID
- ItemName
Properties
- PropertyID
- PropertyName
- PropertyValue
- PropertyValueType
- TransmitTime
- ItemID [fk]
The properties tabl...
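For illustration, a sketch of that schema as DDL (the data types are assumptions, since the post only lists column names):
CREATE TABLE dbo.Items (
    ItemID int IDENTITY(1,1) PRIMARY KEY,
    ItemName nvarchar(200) NOT NULL
)

CREATE TABLE dbo.Properties (
    PropertyID int IDENTITY(1,1) PRIMARY KEY,
    PropertyName nvarchar(200) NOT NULL,
    PropertyValue nvarchar(max) NULL,
    PropertyValueType nvarchar(50) NULL,
    TransmitTime datetime NOT NULL,
    ItemID int NOT NULL REFERENCES dbo.Items (ItemID)
)
This is the classic entity-attribute-value layout, so the Properties table grows by one row per property received.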
I need to repeatedly read data from a database in my Java code, and therefore adopted JDBC. However, it seems to me that using JDBC takes up a lot of memory. I tried to be careful about closing the objects created for JDBC (ResultSet, Statement), but it still seems to hog a lot of memory, especially compared to reading in i...
This question is like a continuation of my previous question:
http://stackoverflow.com/questions/1722155/am-i-right-that-innodb-is-better-for-frequent-concurrent-updates-and-inserts-than/
But this time I have concrete questions.
We know that MyISAM is faster than InnoDB when we don't have many concurrent updates (inserts). When we have ...
Do we need to update table statistics after calling TRUNCATE TABLE, or do they get updated automatically?
Q: Do we need to call "UPDATE STATISTICS" after truncating a table?
...
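If an explicit refresh is wanted after reloading the table, the statement itself is simply the following (the table name is a placeholder):
TRUNCATE TABLE dbo.MyTable

-- ... reload the table here ...

-- optional explicit refresh; whether it is needed depends on the database's
-- auto-update-statistics settings and on how the table is queried afterwards
UPDATE STATISTICS dbo.MyTable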
Setting up a new database that has a comments table. I've been told to expect this table to get extremely large. I'm wondering if there is any particular reason why I wouldn't want to keep this table in the same database as the rest of the data for the site.
...