large-data

Display large data on a web site

Hello, I have some tables in my database with about 7K rows and I need to make a report on the web site (ASP.NET) with custom formatting and pivot-table-like details. What is the best solution for this? When I render, for example, a Repeater with this amount of data, it's very slow. Thanks for advice ...

Command-line script or software tools to label a 3D point cloud dataset

How can I label a 3D point cloud dataset? Is there software that can load a text file containing x,y,z values and then visualize it, so that I can label it? ...
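If a quick script is enough, a minimal Python sketch along these lines could load the text file, show it as a 3D scatter plot for inspection, and write a label column back out. The file names, the whitespace-separated x y z layout, and the labeling workflow are assumptions, not from the question.

```python
# Minimal sketch: load an ASCII x y z file, visualize it, save labels alongside it.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection on older matplotlib)

points = np.loadtxt("cloud.txt")            # assumes whitespace-separated "x y z" rows
labels = np.zeros(len(points), dtype=int)   # one label per point, 0 = unlabeled

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=1)
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()

# after assigning labels (e.g. per selected region), write x, y, z, label per row
np.savetxt("cloud_labeled.txt", np.column_stack([points, labels]))
```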

Fast data retrieval in MySQL

I have a table of users - it contains millions of rows (user-id is the primary key). I just want to retrieve user-id and joining-date. Using SELECT user-id, joining-date FROM users takes a lot of time. Is there a faster way to query/retrieve the same data from this table? ...
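Two things usually help here: a composite index covering (user-id, joining-date), so MySQL can answer the query from the index alone, and streaming the rows into the client instead of buffering the whole result set. The streaming side might look like the following Python sketch using pymysql's unbuffered cursor; the connection details are placeholders.

```python
# Hedged sketch: stream the result set row by row instead of buffering it all.
import pymysql
import pymysql.cursors

conn = pymysql.connect(host="localhost", user="app", password="secret",
                       database="mydb", cursorclass=pymysql.cursors.SSCursor)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT user_id, joining_date FROM users")
        for user_id, joining_date in cur:
            pass  # handle each row here instead of collecting millions in memory
finally:
    conn.close()
```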

How to send large objects using boost::asio

Good day. I'm receiving large objects over the network using boost::asio, and I have this code: for (int i = 1; i <= num_packets; i++) boost::asio::async_read(socket_, boost::asio::buffer(Obj + packet_size * (i - 1), packet_size), boost::bind(...)); where Obj is a My_Class *. I'm not sure whether that approach is possible (because I have a pointer t...

SQL Server - Merging large tables without locking the data

I have a very large set of data (~3 million records) which needs to be merged with updates and new records on a daily schedule. I have a sproc that breaks the recordset up into 1000-record chunks and uses the MERGE command with temp tables in an attempt to avoid locking the live table while the data is being updated. The problem is...

Scalable, fast, text file backed database engine?

I am dealing with large amounts of scientific data that are stored in tab-separated .tsv files. The typical operations to be performed are reading several large files, filtering out only certain columns/rows, joining with other sources of data, adding calculated values and writing the result as another .tsv. The plain text is used for i...
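If Python is an option, pandas covers exactly that read/filter/join/compute/write cycle. A minimal sketch, where the file names, column names, and chunk size are placeholders:

```python
# Sketch of the read / filter / join / compute / write cycle on tab-separated files.
import pandas as pd

lookup = pd.read_csv("lookup.tsv", sep="\t")            # smaller reference table

pieces = []
for chunk in pd.read_csv("big_input.tsv", sep="\t",
                         usecols=["sample", "value"],   # keep only needed columns
                         chunksize=100_000):            # stream the large file
    chunk = chunk[chunk["value"] > 0]                   # row filter
    chunk = chunk.merge(lookup, on="sample", how="left")  # join with other data
    chunk["scaled"] = chunk["value"] * 2.0              # calculated column
    pieces.append(chunk)

pd.concat(pieces).to_csv("result.tsv", sep="\t", index=False)
```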

Updating large datasets in Hibernate

I have a standalone app developed in Spring and Hibernate. I need to update a pretty large dataset, and right now the speed of the update makes it unusable. I'm looking for options to implement this more efficiently. I realize Hibernate isn't the ideal tool for large batch updates, but I need to make it work for now. There are 3 tables...

How to improve display and handling of a large number of inlines in django-admin?

When displaying an inline for a model, if there's a large number of inlines, the change page loads slowly and it can be hard to navigate through all of them. I'm already using an inline-collapsing trick (found on DjangoSnippets, but the search there is not working so I can't share the link here), but it still isn't easy to browse since t...
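A few built-in InlineModelAdmin options usually help before reaching for custom tricks; a sketch with hypothetical model names:

```python
# Sketch of admin options that reduce rendering cost for inlines with many rows.
from django.contrib import admin
from myapp.models import Order, OrderItem  # hypothetical models


class OrderItemInline(admin.TabularInline):
    model = OrderItem
    extra = 0                     # no blank "add another" forms
    max_num = 200                 # cap how many forms are rendered here
    classes = ["collapse"]        # built-in collapsing (Django 1.10+)
    raw_id_fields = ["product"]   # avoids a huge <select> widget per row
    show_change_link = True       # edit individual items on their own page

    def get_queryset(self, request):
        # fetch related rows in one query instead of one query per inline row
        return super().get_queryset(request).select_related("product")


@admin.register(Order)
class OrderAdmin(admin.ModelAdmin):
    inlines = [OrderItemInline]
```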

C# MemoryStream

I have an extremely large 2D byte array in memory. byte MyBA = new byte[int.MaxValue][10]; Is there any way (probably unsafe) that I can fool C# into thinking this is one huge contiguous byte array? I want to do this so that I can pass it to a MemoryStream and then a BinaryReader. MyReader = new BinaryReader(MemoryStream(*MyBA)) //Sy...

Need help improving the performance of large datasets in Grails

This solution works, but performance is lower than expected. A query returning 200K rows takes several minutes and pegs the CPU on my dev box. Running the same* query in Query Analyzer returns all results in < 1 minute. class MyController { def index = {...} ... def csv = { ... def rs = DomainClass.createCriteria().scroll {} ...

MySQL: Large table splitting

I have a huge table in a database and I want to split it into several parts physically, maintaining the database schema. For example, the table is named TableName and has 2,000,000 rows. I would like to split that table into four parts, but I want to keep working with the table in the same way, so select [Column List] from TableName where [F...

Server-side caching of redundant client data for charting

The functionality that I have now enables me to generate charts (graphs, tables, etc.) from the XML data created by manipulating results extracted from the database. This works well when I have a limited amount of data, but as the data grows the time to generate charts increases substantially. The following is an example scenario that describ...
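The usual remedy is to cache the generated chart data on the server, keyed by the request parameters, so overlapping requests don't recompute everything. The idea is language-agnostic; a minimal Python sketch with illustrative names:

```python
# Sketch: cache the expensive chart XML per parameter set, with a time-to-live.
import time

_cache = {}  # key -> (expires_at, xml)
TTL_SECONDS = 300

def get_chart_xml(params, build_xml):
    """params: dict describing the request; build_xml: expensive callable."""
    key = tuple(sorted(params.items()))
    hit = _cache.get(key)
    if hit and hit[0] > time.time():
        return hit[1]                              # serve the cached result
    xml = build_xml(**params)                      # expensive DB query + XML generation
    _cache[key] = (time.time() + TTL_SECONDS, xml)
    return xml
```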

How to fill Gtk::TreeModelColumn with a large dataset without locking up the application

Hello, I need to fill a large (well, maybe not that large: several thousand entries) dataset into a Gtk::TreeModelColumn. How do I do that without locking up the application? Is it safe to put the processing into a separate thread? What parts of the application do I have to protect with a lock then? Is it only the Gtk::TreeModelColumn class,...
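The usual pattern is to avoid touching GTK from another thread at all: append rows in small batches from an idle callback on the main loop, so the UI keeps processing events between batches and no extra locking is needed. The question is about gtkmm, where Glib::signal_idle().connect(...) plays this role; here is the same idea sketched in Python with PyGObject, purely for illustration:

```python
# Sketch: append rows in small batches from an idle callback on the GTK main loop.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, GLib

store = Gtk.ListStore(str)
rows = ("entry %d" % i for i in range(10_000))   # stand-in for the real dataset

def append_batch(batch_size=200):
    for _ in range(batch_size):
        try:
            store.append([next(rows)])
        except StopIteration:
            return False         # all data loaded, remove the idle callback
    return True                  # more to do, run again when the loop is idle

GLib.idle_add(append_batch)
# (in a real app this runs inside Gtk.main(), with the store attached to a TreeView)
```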

How to update one table from another one without specifying column names?

I have two tables with identical structure and a VERY LARGE number of fields (about 1000). I need to perform 2 operations: 1) Insert all rows from the second table into the first. Example: INSERT INTO [1607348182] SELECT * FROM _tmp_1607348182; 2) Update the first table from the second table, but for the update I can't find proper SQL synt...

How to optimize operations on large (75,000 items) sets of booleans in Python?

There's this script called svnmerge.py that I'm trying to tweak and optimize a bit. I'm completely new to Python though, so it's not easy. The current problem seems to be related to a class called RevisionSet in the script. In essence what it does is create a large hashtable(?) of integer-keyed boolean values. In the worst case - one fo...
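The usual fix for this kind of structure is to keep only the "true" revision numbers in a set of ints, so union, intersection and difference run as native set operations instead of Python-level loops over a dict of booleans. A minimal sketch with made-up revision ranges:

```python
# Sketch: represent present revisions as a set of ints; set algebra is done in C.
a = set(range(1, 75_001))          # e.g. revisions 1..75000 present
b = set(range(50_000, 120_001))

merged    = a | b                  # union
common    = a & b                  # intersection
only_in_a = a - b                  # difference
has_rev   = 60_000 in a            # membership test is O(1)

print(len(merged), len(common), len(only_in_a), has_rev)
```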

How to read a large dataset in R

Possible Duplicate: Quickly reading very large tables as dataframes in R Hi, while trying to read a large dataset in R, the console displayed the following errors: data<-read.csv("UserDailyStats.csv", sep=",", header=T, na.strings="-", stringsAsFactors=FALSE) > data = data[complete.cases(data),] > dataset<-data.frame(user_id=as.char...