I have found that reading 64 contiguous memory locations (elements) at a time is the most efficient way of retrieving information over Modbus using the rmodbus library.

My goal is to log the information read into a simple database that can be mined to generate graphs and tables of data on a webpage, and to store the most current value of each element in an instance variable.

The data is read into an array by the rmodbus library, where each element's index represents the element's address. However, I would like to convert each index to octal, since that corresponds to the element addressing scheme the users are already familiar with and would be easier to reference in the interface.
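
For reference, the read itself looks something like this (a sketch assuming rmodbus's TCP client block API; the host, port, and unit ID are placeholders, and older rmodbus versions expose a slightly different interface):

    require 'rmodbus'

    # Read 64 contiguous holding registers starting at address 0 and
    # key each value by its octal address string.
    readings = {}
    ModBus::TCPClient.new('192.168.1.10', 502) do |client|
      client.with_slave(1) do |slave|
        slave.read_holding_registers(0, 64).each_with_index do |value, index|
          readings[index.to_s(8)] = value
        end
      end
    end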

Edit (adding details and refinement): At this point, I am working with the following schema:

create_table "elements", :force => true do |t|
    t.string   "name"
    t.integer  "modbus_connection_id"
    t.string   "address"
    t.string   "eng_unit"
    t.integer  "base"
    t.string   "wiring"
    t.text     "note"
    t.boolean  "log"
    t.datetime "created_at"
    t.datetime "updated_at"
  end

  create_table "events", :force => true do |t|
    t.integer  "element_id"
    t.string   "value"
    t.datetime "created_at"
    t.datetime "updated_at"
  end

  create_table "modbus_connections", :force => true do |t|
    t.string   "name"
    t.string   "ip_address"
    t.integer  "port"
    t.integer  "client"
    t.text     "note"
    t.datetime "created_at"
    t.datetime "updated_at"
  end
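
For reference, the associations this schema implies at the model layer would be roughly the following (class and association names are inferred from the foreign keys):

    class ModbusConnection < ActiveRecord::Base
      has_many :elements
    end

    class Element < ActiveRecord::Base
      belongs_to :modbus_connection
      has_many   :events
    end

    class Event < ActiveRecord::Base
      belongs_to :element
    end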

The idea is that a background process will probably poll over Modbus and compare each read against the previous one, logging only Elements that have changed and have been flagged for logging. Elements should probably be stored, both in the db and in variables, already scaled so the frontend doesn't have to worry about it. The ones that aren't logged are still held in instance variables for semi-realtime, heads-up-display-style monitoring. The logged elements' Events table will then be parsed for graphs and tables only when requested by the UI.
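
A minimal sketch of that poll-compare-log loop (here `read_all_elements` is a hypothetical helper wrapping the 64-register reads above, returning values keyed by octal address; the one-second interval is illustrative):

    previous = {}
    loop do
      current = read_all_elements  # hypothetical: { "77" => 1023, ... }

      # Keep every current value available for the heads-up display...
      @current_values = current

      # ...but persist only values that changed and are flagged for logging.
      current.each do |address, value|
        next if previous[address] == value
        element = Element.find_by_address(address)
        element.events.create(:value => value.to_s) if element && element.log?
      end

      previous = current
      sleep 1
    end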

First Question: (Finally!) Does it make more sense to live with the data in an array and apply a layer that handles converting the index (and, as it happens, the corresponding element value as well, which I convert with v.collect{|i| i.to_s(8)}), or is it better to transfer everything into a Hash where the index and value can live happily ever after in their most usable form?

First Question edit: Given the evolution of my question toward logging only changes in the data into a simple SQLite db, and given that I will need to track changes between Modbus reads to determine which elements changed, does the array or the hash do comparison more efficiently? Should I even care?
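
To make that concrete, here is roughly what the diff looks like in each representation (a sketch; the `previous_*` variables hold the prior read). Both are linear scans over 64 entries, so at this scale the efficiency difference is negligible:

    # Array form: compare positionally, converting the index at the boundary.
    changed = []
    current_array.each_with_index do |value, index|
      changed << [index.to_s(8), value] if previous_array[index] != value
    end

    # Hash form, keyed by octal address: the same diff reads more directly.
    changed = current_hash.reject { |address, value| previous_hash[address] == value }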

Second Question: In Rails, assuming a one-minute logging interval, are the approximately one thousand data points better held in independent fields, or should I leave them in 64-element chunks and parse the information on its way to the interface?

Second Question edit: Running copious amounts of unchanged data into a one-minute 'row' of a database seems very flat. Plus, it doesn't allow for easy dynamic selection of elements to be logged. It would seem far more appropriate to make the "logger" event-based rather than interval-based, which pretty well means the First Question is the more important one here, as its answer will likely become the state-checking mechanism as well.

I'm guessing I'm unnecessarily re-inventing a wheel with that revelation, as this is becoming a lot like existing 'loggers.' Reading around SO reveals that logging into a DB vs. the FS is an age-old question. As the log itself is the foundation of the app, I am inclined to log into a DB, most likely SQLite given what I've read.

Second Question edited again: Now it's a question of normalization. Everything I'm reading suggests that "scalability" tends to require denormalization. My logged "Events" table will be relatively simple: timestamps, the value, and the element id. Should it also denormalize the most common attribute(s) from the Elements table, or is a join OK at this relatively small scale?
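
For scale, the join in question would look something like this with Rails 2.x-style finders (a sketch; the conditions and limit are illustrative):

    # One eager-loaded query fetches events together with their element
    # attributes, so the join cost is only paid at read time.
    events = Event.all(
      :include    => :element,
      :conditions => { :elements => { :log => true } },
      :order      => 'events.created_at DESC',
      :limit      => 100
    )

    events.each do |event|
      puts "#{event.element.name}: #{event.value} #{event.element.eng_unit}"
    end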

Anybody have any favorite Ruby logging frameworks/gems/bundles/plugins/whatever?

A:

I don't think this question has been answered elsewhere :) You may be the only person on SO using Rails and Modbus. I have Rails and Modbus experience, except that my Modbus experience is with nmodbus on the .NET Compact Framework. I don't know that I have any definite answers for you, but I can share the approach I used.

When we poll the device, we immediately apply any parsing, scaling, or conversion to the data (but we do not have 1000 values). The data is then logged into the database. Now any clients that want to use the database don't have any idea about Modbus, and they don't care; the problem moves from knowing Modbus to what the application really cares about (voltages!). In your scenario I would try to completely decouple this polling application from your Rails app.
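
At the poller boundary, that amounts to something like the sketch below (the scale factor is purely illustrative; in practice it would come from per-element configuration):

    # The poller owns all Modbus knowledge: it reads raw register counts
    # and stores engineering values, so database clients never see Modbus.
    VOLTS_PER_COUNT = 0.1  # illustrative scale factor

    raw    = slave.read_holding_registers(0, 64)
    scaled = raw.map { |count| count * VOLTS_PER_COUNT }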

Now, why decoupling might not be possible: 1000 data points. That's a problem. For argument's sake, even if you were able to normalize this data into 50 tables, that's 50 tables with 20 columns each... yuck. I don't know of an easy way to log 1000 data points, but building what is basically a persistent hash table might not work out.

The bad part about requiring Rails to know how to parse the register values is that now your Rails application has knowledge of Modbus (not at the reading-registers level, but at the parsing level). Also, if you wanted to use a client other than Rails, that application would also need parsing knowledge. Maybe that's the whole point of your Rails application, though? The Rails application knows how to slice and dice the Modbus readings and gives users/clients a nice UI/web services to work with.

These are good questions, but it's difficult to give specific advice without knowing more about what it is that you're building. As for normalization: trial and error. Give it a shot both ways. I can't say things like "if you have more than 40 columns in your SQLite table it's going to fall apart"; stuff like that you just have to run into...

Andy Gaskell
Very interesting. Your thoughts parallel mine as mine evolve. The question of 1000 data points becomes much less daunting when I consider logging only the changes rather than redundant values. I probably wasn't clear about that; like you mentioned, I have considered the decoupling of the gathering and storage to be imperative. There will only be, at the absolute most, a hundred values that might change in a one-second period, so storing them in a log style should be easy. Leave it to Rails to parse for display, where speed isn't too important. Glad I'm not completely alone! Thanks.
arriflex