The following does not work:

one.py

import shared
shared.value = 'Hello'
raw_input('A cheap way to keep process alive..')

two.py

import shared
print shared.value

run from two command lines as:

$ python one.py
$ python two.py

(the second one gets an attribute error, rightly so).

Is there a way to accomplish this, that is, share a variable between two scripts?

+5  A: 

What you're trying to do here (sharing state through a Python module across separate Python interpreter processes) won't work.

A value in a module can be updated by one module and then read by another, but only within the same Python interpreter process. What you are actually after is a form of interprocess communication; that could be accomplished via socket communication between the two processes, but it is significantly less trivial than what you are expecting to work here.
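For illustration, a minimal sketch of the socket approach (the port number and the one-message protocol are arbitrary choices, not anything from the question):

one.py

import socket

value = 'Hello'

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 50007))   # arbitrary local port
server.listen(1)
conn, addr = server.accept()        # blocks until two.py connects
conn.sendall(value)
conn.close()
server.close()

two.py

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 50007))
print client.recv(1024)             # prints 'Hello'
client.close()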

gab
+3  A: 

You need to store the variable in some sort of persistent file. There are several modules to do this, depending on your exact need.

The pickle and cPickle modules can save and load most Python objects to a file.

The shelve module can store Python objects in a dictionary-like structure, using pickle behind the scenes; a short sketch follows this list.

The dbm/bsddb/dbhash/gdbm modules can store string values in a dictionary-like structure.

The sqlite3 module can store data in a lightweight SQL database.
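For example, a minimal shelve sketch (the filename shared_db is an arbitrary choice):

one.py

import shelve

db = shelve.open('shared_db')   # creates the file if it does not exist
db['value'] = 'Hello'
db.close()

two.py

import shelve

db = shelve.open('shared_db')
print db['value']               # prints 'Hello'
db.close()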

The biggest problem with most of these is that they are not synchronised across different processes - if one process reads a value while another is writing to the datastore then you may get incorrect data or data corruption. To get round this you will need to write your own file locking mechanism or use a full-blown database.
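As a rough illustration of the file-locking idea, a sketch using the stdlib fcntl module (Unix-only, and the lock filename is an arbitrary choice):

import fcntl

lock = open('shared.lock', 'w')
fcntl.flock(lock, fcntl.LOCK_EX)    # blocks until no other process holds the lock
try:
    pass    # safely read or write the shared datastore here
finally:
    fcntl.flock(lock, fcntl.LOCK_UN)
    lock.close()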

Dave Kirby
+1  A: 

Use text files or environment variables. Since the two scripts run separately, you can't really do what you are trying to do.

David Brunelle
+5  A: 

You're not going to be able to do what you want without storing the information somewhere external to the two instances of the interpreter.
If it's just simple variables you want, you can easily dump a Python dict to a file with the pickle module in script one and then re-load it in script two. Example:

one.py

import pickle

shared = {"Foo": "Bar", "Parrot": "Dead"}
fp = open("shared.pkl", "wb")   # binary mode for portability
pickle.dump(shared, fp)
fp.close()

two.py

import pickle

fp = open("shared.pkl", "rb")
shared = pickle.load(fp)
fp.close()
print shared["Foo"]             # prints 'Bar'
Drewfer
A: 

In your example, the first script runs to completion, and then the second script runs. That means you need some sort of persistent state. Other answers have suggested using text files or Python's pickle module. Personally I am lazy, and I wouldn't use a text file when I could use pickle; why should I write a parser to parse my own text file format?

Instead of pickle you could also use the json module to store it as JSON. This might be preferable if you want to share the data to non-Python programs, as JSON is a simple and common standard. If your Python doesn't have json, get simplejson.
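A minimal json version of the pickle example above (using a hypothetical shared.json file):

one.py

import json

shared = {"Foo": "Bar", "Parrot": "Dead"}
fp = open("shared.json", "w")
json.dump(shared, fp)
fp.close()

two.py

import json

fp = open("shared.json")
shared = json.load(fp)
fp.close()
print shared["Foo"]             # prints 'Bar'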

If your needs go beyond pickle or json -- say you actually want to have two Python programs executing at the same time and updating the persistent state variables in real time -- I suggest you use the SQLite database. Use an ORM to abstract the database away, and it's super easy. For SQLite and Python, I recommend Autumn ORM.
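If you would rather not pull in an ORM, a bare-bones sketch with the stdlib sqlite3 module works too (the table and key names here are arbitrary choices):

import sqlite3

conn = sqlite3.connect('shared.db')
conn.execute('CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)')
conn.execute('INSERT OR REPLACE INTO kv VALUES (?, ?)', ('value', 'Hello'))
conn.commit()   # sqlite3 serialises concurrent writers, unlike a plain file

row = conn.execute('SELECT value FROM kv WHERE key = ?', ('value',)).fetchone()
print row[0]    # prints 'Hello'
conn.close()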

steveha
A: 

I'd advise using the multiprocessing module. It won't let you launch the two scripts independently from the command line, but it does make it easy for two separate processes to talk to each other.

From the doc's examples:

from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()
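If you really do want two independently launched scripts, the multiprocessing.connection submodule can bridge them; a rough sketch (the address and authkey are arbitrary choices):

one.py

from multiprocessing.connection import Listener

listener = Listener(('localhost', 6000), authkey='secret')
conn = listener.accept()    # blocks until two.py connects
conn.send('Hello')
conn.close()
listener.close()

two.py

from multiprocessing.connection import Client

conn = Client(('localhost', 6000), authkey='secret')
print conn.recv()           # prints 'Hello'
conn.close()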
UsAaR33
A: 

sudo apt-get install memcached python-memcache

one.py

import memcache
shared = memcache.Client(['127.0.0.1:11211'], debug=0)
shared.set('Value', 'Hello')

two.py

import memcache
shared = memcache.Client(['127.0.0.1:11211'], debug=0)
print shared.get('Value')   # prints 'Hello'
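Note that memcached holds everything in RAM, so the shared value is lost if the daemon restarts, and the memcached server must be running before either script starts.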
ruben