I have a database that has over 10,000,000 rows, and querying it right now can take a few seconds just to find some basic information. This isn't ideal. I know that the best way to optimize is to minimize the number of rows, which is possible, but right now I don't have the time to do that.
What's the easiest way to optimize a mysql dat...
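The usual first step, before restructuring anything, is to check the query plan with EXPLAIN and make sure the columns in the WHERE clause are indexed. A minimal sketch, with hypothetical table and column names since the actual query isn't shown:

-- Hypothetical names: a `users` table queried by `email`.
EXPLAIN SELECT id, email FROM users WHERE email = 'someone@example.com';

-- If EXPLAIN reports a full table scan (type: ALL), an index on the filter
-- column usually turns a multi-second scan into a millisecond lookup.
CREATE INDEX idx_users_email ON users (email);

On a 10-million-row table the index build itself takes a while, but it is far cheaper than trimming the row count.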
Is there a better way to write this query? I'm looking for a more efficient solution, if one is available.
SELECT `unitid`, `name` FROM apartmentunits WHERE aptid IN (
SELECT `aptid` FROM rentconditionsmap WHERE rentcondid = 4 AND condnum = 1
) AND aptid IN (
SELECT `aptid` FROM rentconditionsmap WHERE rentcondid = 2...
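Since the query is cut off above, the sketch below assumes the pattern simply continues with more IN conditions against rentconditionsmap. On older MySQL versions, rewriting the IN subqueries as joins often helps, because each condition becomes an index lookup per row instead of a repeated subquery:

-- Sketch only; the second condition's condnum filter is unknown from the
-- truncated query, so it is left as a comment.
SELECT au.unitid, au.name
FROM apartmentunits AS au
JOIN rentconditionsmap AS r1
  ON r1.aptid = au.aptid AND r1.rentcondid = 4 AND r1.condnum = 1
JOIN rentconditionsmap AS r2
  ON r2.aptid = au.aptid AND r2.rentcondid = 2 -- plus the original condnum filter

Either form depends on rentconditionsmap being indexed on something like (aptid, rentcondid, condnum), assuming such an index exists or can be added.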
Hello,
I have a database table that is growing too big (a few hundred million rows) and needs to be optimized, but before I get into partitioning it, I thought I'd ask for suggestions.
Here is the usage:
0. Table contains about 10 columns of about 20 bytes each.
INSERTS are performed at a rate of hundreds of times per secon...
$offset = SELECT FLOOR(RAND() * COUNT(*)) FROM t_table
SELECT * FROM t_table LIMIT $offset,1
This works great with MyISAM, but I would like to change this table to InnoDB (all other DB tables are InnoDB) to take advantage of foreign keys and avoid table-level locking.
The primaryId field of this table is a VARCHAR(10)
I can't "fo...
We are working on a video processing application using EmguCV and recently had to do some pixel-level operations. I initially wrote the loops to go across all the pixels in the image as follows:
for (int j = 0; j < Img.Width; j++ )
{
for (int i = 0; i < Img.Height; i++)
{
// Pixel operation code
}
}
The time to exec...
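For reference, EmguCV exposes pixel data as Image<TColor, TDepth>.Data indexed [row, column, channel], so putting the row (Height) loop on the outside keeps memory access sequential. A sketch assuming a grayscale byte image, which is not stated in the question:

// Assumption: a grayscale byte image loaded from a placeholder file; the
// pixel operation shown (inversion) stands in for whatever the real loop does.
using Emgu.CV;
using Emgu.CV.Structure;

Image<Gray, byte> img = new Image<Gray, byte>("input.png");
byte[,,] data = img.Data;                      // indexed [row, col, channel]

for (int row = 0; row < img.Height; row++)     // rows on the outside...
{
    for (int col = 0; col < img.Width; col++)  // ...columns inside: walks memory in order
    {
        data[row, col, 0] = (byte)(255 - data[row, col, 0]);
    }
}

Accessing the Data array directly also avoids the per-pixel overhead of the indexer on Image itself.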
Hi All.
I need to optimize a MySQL query doing an ORDER BY. No matter what I do, MySQL ends up doing a filesort instead of using the index.
Here's my table DDL... (Yes, in this case the DAYSTAMP and TIMESTAMP columns are exactly the same.)
CREATE TABLE DB_PROBE.TBL_PROBE_DAILY ( DAYSTAMP date NOT NULL, TIMESTAMP date NOT NULL, SOURCE_...
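The DDL is cut off, so this is only the general pattern: to avoid the filesort, MySQL needs one composite index whose leading columns cover the WHERE equality filters and whose trailing columns match the ORDER BY in the same direction. The column names below are assumptions:

-- Hypothetical: SOURCE_ID stands in for whatever the truncated SOURCE_ column is.
ALTER TABLE DB_PROBE.TBL_PROBE_DAILY
  ADD INDEX idx_source_day_time (SOURCE_ID, DAYSTAMP, TIMESTAMP);

-- A query shaped like this can then be read straight from the index, with no filesort:
-- SELECT ... FROM DB_PROBE.TBL_PROBE_DAILY
-- WHERE SOURCE_ID = ? AND DAYSTAMP = ?
-- ORDER BY TIMESTAMP;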
I'm writing a sparse matrix solver using the Gauss-Seidel method. By profiling, I've determined that about half of my program's time is spent inside the solver. The performance-critical part is as follows:
size_t ic = d_ny + 1, iw = d_ny, ie = d_ny + 2, is = 1, in = 2 * d_ny + 1;
for (size_t y = 1; y < d_ny - 1; ++y) {
for (size_t x...
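The loop body is cut off above, so here is only a generic, self-contained illustration of a Gauss-Seidel sweep over a 5-point stencil (a plain Poisson problem with unit coefficients; it is not the author's solver and all names are invented):

#include <cstddef>
#include <vector>

// One Gauss-Seidel sweep on an nx-by-ny grid stored row by row.
// Each update reuses values already refreshed earlier in the same sweep
// (previous row, previous column), which is what distinguishes it from Jacobi.
void gauss_seidel_sweep(std::vector<float>& x, const std::vector<float>& b,
                        std::size_t nx, std::size_t ny) {
    auto idx = [ny](std::size_t i, std::size_t j) { return i * ny + j; };
    for (std::size_t i = 1; i + 1 < nx; ++i) {
        for (std::size_t j = 1; j + 1 < ny; ++j) {
            x[idx(i, j)] = 0.25f * (b[idx(i, j)]
                                    + x[idx(i - 1, j)] + x[idx(i + 1, j)]
                                    + x[idx(i, j - 1)] + x[idx(i, j + 1)]);
        }
    }
}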
Hi, just trying to optimize my code.
I need to prefill a form with DB data, and I need to check whether each variable exists before filling the text box (I don't like the @ error hiding).
The form is really long, so I need to check many times whether the variables exist.
So which is faster:
if (isset ($item))
if ($item_exists==true)
or even
if ...
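For what it's worth, the cost of either check is tiny next to the DB work, so readability usually decides it. A short sketch of the isset-based prefill; the field and variable names are made up:

<?php
// Hypothetical: $row is the fetched DB record; isset() avoids the
// "undefined index" notice without resorting to @.
$name  = isset($row['name'])  ? $row['name']  : '';
$email = isset($row['email']) ? $row['email'] : '';
?>
<input type="text" name="name"  value="<?php echo htmlspecialchars($name); ?>">
<input type="text" name="email" value="<?php echo htmlspecialchars($email); ?>">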
I'm contributing to a project that's been developed for over a year, and I only jumped into it lately.
There are performance problems with the application.
I'm aware of several profiling tools and of approaches for optimization such as finding the bottlenecks and optimizing them. I'm comfortable with algorithms, running times of iterations, etc....
I can examine the optimization using a profiler, the size of the executable file, and the time taken for execution.
I can get the result of the optimization.
But I have these questions:
How can I get the optimized C code?
Which algorithms or methods does the C compiler use to optimize the code?
Thanks in advance.
...
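One concrete way to see what the optimizer actually did (standard GCC usage, offered here as an illustration rather than an answer from the original thread): the compiler does not emit optimized C, it emits optimized assembly, which you can dump with -S and compare across optimization levels.

/* sum.c -- a trivial function whose optimized output is easy to read.
 *
 *   gcc -O0 -S sum.c    writes unoptimized assembly to sum.s
 *   gcc -O2 -S sum.c    writes optimized assembly (the loop is kept in
 *                        registers and often unrolled or vectorized)
 */
long sum(const long *a, long n) {
    long total = 0;
    for (long i = 0; i < n; ++i)
        total += a[i];
    return total;
}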
I'm teaching JEE, especially JPA, Spring and Spring MVC. As I don't have much experience with large projects, it is difficult to know what to present to students about optimisation of ORM.
At present, I present some classic optimisation tricks:
prepared statements (most ORMs implicitly use them by default)
first and second-...
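One example that tends to stick with students is the N+1 select problem and the JOIN FETCH fix. A minimal sketch; the entities and field names are invented for illustration:

import javax.persistence.*;
import java.util.List;

// Hypothetical entities, only to show the N+1 problem and its fix.
@Entity
class Customer {
    @Id @GeneratedValue Long id;
    @OneToMany(mappedBy = "customer", fetch = FetchType.LAZY)
    List<Purchase> purchases;
}

@Entity
class Purchase {
    @Id @GeneratedValue Long id;
    @ManyToOne Customer customer;
}

class CustomerQueries {
    // Without JOIN FETCH, touching customer.purchases later fires one extra
    // SELECT per customer (N+1); with it, everything arrives in a single query.
    static List<Customer> customersWithPurchases(EntityManager em) {
        return em.createQuery(
                "SELECT DISTINCT c FROM Customer c JOIN FETCH c.purchases",
                Customer.class)
                 .getResultList();
    }
}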
Related to my other question, I have now modified the sparse matrix solver to use the SOR (Successive Over-Relaxation) method. The code is now as follows:
void SORSolver::step() {
float const omega = 1.0f;
float const
*b = &d_b(1, 1),
*w = &d_w(1, 1), *e = &d_e(1, 1), *s = &d_s(1, 1), *n = &d_n(1, 1),
*xw...
After my last, failed, attempt at asking a question here I'm trying a more precise question this time:
What I have:
A huge dataset (finite, but I wasted days of multi-core processing time to compute it before...) of ISet<Point>.
A list of input values between 0 and 2^n, n ≈ 17
What I need:
3) A table of [1], [2] where I map every value ...
Is there a way to make the following code faster? It's becoming too slow when the length of the array is more than 1000 records, especially in IE6.
dbusers = data.split(";");
$("#users").html("");
for (i = 0; i < dbusers.length; i++) {
if ($("#username").val() != "") {
if (dbusers[i].indexOf($("#username").val()) != -1) {
...
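The two usual wins here (suggestions, not taken from the truncated code): read the #username value once outside the loop, and build the markup as one string so the DOM is touched a single time. The per-user markup below is a placeholder, since the original loop body is cut off:

var dbusers = data.split(";");
var filter = $("#username").val();      // read the input once, not on every iteration
var html = [];                          // collect markup, update the DOM once at the end

for (var i = 0; i < dbusers.length; i++) {
    if (filter !== "" && dbusers[i].indexOf(filter) !== -1) {
        html.push("<li>" + dbusers[i] + "</li>");   // placeholder markup per user
    }
}

$("#users").html(html.join(""));        // single DOM write instead of one per match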
When I open a DataSet in Visual Studio 2008 to design or modify it, it always takes a very long time (more than five minutes) before I can continue my work. While I'm waiting I can't do anything in Visual Studio; moreover, CPU and memory usage grow dramatically.
I want to know: is there any way to reduce this waiting time?
Hardwa...
I have a site in multiple languages. I have a method that returns today's currency rates in an array, and I then display those rates in a table.
// --- en/index.php
<?php
include_once "../exchangeRates.php";
$currencies = ReadExchangeRates();
// --- fr/index.php
<?php
include_once "../exchangeRates.php";
$currencies = ReadExchangeRates...
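Since every language front page calls ReadExchangeRates() separately, one common optimisation (only a guess at what the truncated question is after) is to cache the array briefly so the underlying lookup runs once. The wrapper name, cache path, and TTL below are all invented:

<?php
// exchangeRatesCached.php -- hypothetical caching wrapper around ReadExchangeRates()
include_once "exchangeRates.php";

function ReadExchangeRatesCached($ttl = 3600) {
    $cacheFile = sys_get_temp_dir() . "/exchange_rates.cache";
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return unserialize(file_get_contents($cacheFile)); // reuse recent rates
    }
    $rates = ReadExchangeRates();                           // the expensive call, run once per TTL
    file_put_contents($cacheFile, serialize($rates));
    return $rates;
}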
I have an array of integers; let's assume they are of type int64_t. Now, I know that only the first n bits of every integer are meaningful (that is, I know that they are limited by some bounds).
What is the most efficient way to convert the array in the way that all unnecessary space is removed (i.e. I have the first integer at a[0], ...
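A generic way to do this is to copy only the low n bits of each value into a contiguous bit buffer. The sketch below is a straightforward (not the fastest) version; the function name and the byte-buffer representation are my own choices, and it assumes the values are non-negative and really do fit in n bits:

#include <cstdint>
#include <cstddef>
#include <vector>

// Packs the low `n` bits of each input value into a contiguous bit stream.
// Assumes 0 < n <= 64 and every value fits in n bits (no sign handling).
std::vector<uint8_t> pack_bits(const std::vector<int64_t>& values, unsigned n) {
    std::vector<uint8_t> out((values.size() * n + 7) / 8, 0);
    std::size_t bitpos = 0;
    for (int64_t v : values) {
        uint64_t bits = static_cast<uint64_t>(v);
        for (unsigned b = 0; b < n; ++b, ++bitpos) {
            if ((bits >> b) & 1u)
                out[bitpos / 8] |= static_cast<uint8_t>(1u << (bitpos % 8));
        }
    }
    return out;
}

A word-at-a-time version (shifting whole 64-bit chunks into the output) is considerably faster but fiddlier to get right.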
Today I read that there is a program called WinCalibra (scroll a bit down) which can take a text file with properties as input.
This program can then optimize the input properties based on the output values of your algorithm. See this paper or the user documentation for more information (see link above; sadly the documentation is a zipped exe).
Do ...
When I read about the performance of JITted languages like C# or Java, authors usually say that they should/could theoretically outperform many native-compiled applications. The theory being that native applications are usually just compiled for a processor family (like x86), so the compiler cannot make certain optimizations as they may...
I have an old structure class like this: typedef vector<vector<string>> VARTYPE_T; which works as a single variable. This variable can hold anything from a single value, to a list, to table-like data. Most values are long, double, string, or double[3] for coordinates (x, y, z). I just convert them as needed. The variables
are managed in a map like this...