I read many files from my system and I want to read them faster. Right now I do something like this:
results = []
for line in open("filenames.txt").readlines():
    # strip the trailing newline, or open() fails on the raw line
    results.append(open(line.strip(), "r").read())
I don't want to use threading. Any advice is appreciated.
The reason I don't want to use threads is that they would make my code unreadable. I want to find some trick that makes it faster with less code that's easier to understand.
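For what it's worth, here is the same sequential read from above just tidied up, assuming filenames.txt holds one path per line as in the snippet. This is only a readability sketch; since the loop is I/O-bound, I wouldn't expect it to be faster:

with open("filenames.txt") as listing:
    names = [line.strip() for line in listing if line.strip()]
results = []
for name in names:
    with open(name) as fh:          # context manager closes each file promptly
        results.append(fh.read())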
Yesterday I tested another solution with multiprocessing, but it performed badly and I don't know why. Here is the code:
def xml2db(file):
    # parse one XML file with pyquery and copy its fields onto a Product row
    s = pq(open(file, "r").read())
    values = {}
    for field in g_fields:
        values[field] = s("field[@name='%s']" % field).text()
    p = Product()
    for k, v in values.iteritems():
        # skip empty values; only set attributes the model actually has
        if v is not None and v.strip() != "" and hasattr(p, k):
            setattr(p, k, v)
    session.commit()
@cost_time
@statistics_db
def batch_xml2db():
    from multiprocessing import Pool
    p = Pool(5)
    files = glob.glob(g_filter)
    # earlier attempt with a Queue, left here for reference:
    # q = Queue()
    # for file in files:
    #     q.put(file)
    # def worker():
    #     while q.qsize() != 0:
    #         xml2db(q.get())
    p.map(xml2db, files)
    p.close()   # close() must be called before join(), or join() raises
    p.join()
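In case it helps to compare, here is a minimal self-contained Pool sketch that runs as-is. process_one is a hypothetical stand-in for xml2db, and the glob pattern is an assumption in place of g_filter. One thing I noticed while writing it: each worker is a separate process with its own copy of module globals, so something like a session created in the parent is not shared with the workers.

# minimal Pool sketch; process_one is a hypothetical stand-in for xml2db
import glob
from multiprocessing import Pool

def process_one(path):
    # each worker process gets its OWN copy of globals, so per-process
    # resources (e.g. a DB session) must be created inside the worker
    with open(path) as fh:
        return len(fh.read())       # placeholder for the real per-file work

if __name__ == "__main__":          # required on platforms that spawn workers
    files = glob.glob("*.xml")      # assumed pattern, in place of g_filter
    pool = Pool(5)
    sizes = pool.map(process_one, files)
    pool.close()                    # no more tasks
    pool.join()                     # wait for workers to finish
    print(sum(sizes))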