    • Posted By: SteamyTea, Sep 24th 2013 (edited)

    I have eventually got around to making my Pi and CC device work with the real-time clock.

    How do I get it to start when I boot the RPi up? I have tried adding python followed by the path to my script to rc.local and crontab; neither worked. Any ideas, anyone?
    • Posted By: Ed Davies, Sep 25th 2013
    That's a 500 MB .zip you uploaded in June. Really? Sorry - can't be bothered to spend half an hour on this flaky Internet connection to download that just for a quick look.

    Don't you say in /etc/init or /etc/init.d what services you want started and stopped at different run levels? Sorry, I've never understood that stuff in any more detail.
    • Posted By: SteamyTea, Sep 25th 2013
    That upload was an image of what I was using, one of the reasons I wanted to start to write my own stuff. Half the time those large downloads get corrupted anyway.

    Not looked at init or init.d, shall see what I can find.
    Thanks
    •
    rc.local is the traditional quick and dirty way for any Unix system, and should work fine. Can you see any error messages from your command? If viewing the messages at boot-up is a problem you could put them into a temporary file and view later. For example, in rc.local:

    your_command_and_options_go_here > /var/tmp/output.txt 2>&1

    That would redirect the output to /var/tmp/output.txt which you could then examine at your leisure.

    cron usually reports errors via e-mail. Do you see any non-empty files under /var/mail or /var/spool/mail ?
    • Posted By: SteamyTea, Sep 25th 2013 (edited)

    Ed
    Had a look at the init file and it seems I have to write a script to make things work there. Is that right?

    Andrew
    The rc.local file on the RPi looks like this:

    #!/bin/sh -e
    #
    # rc.local
    #
    # This script is executed at the end of each multiuser runlevel.
    # Make sure that the script will "exit 0" on success or any other
    # value on error.
    #
    # In order to enable or disable this script just change the execution
    # bits.
    #
    # By default this script does nothing.

    # Print the IP address
    _IP=$(hostname -I) || true
    if [ "$_IP" ]; then
      printf "My IP address is %s\n" "$_IP"
    fi

    exit 0

    Do I have to change the very first line at the top, the same as in a Python program, so it knows what to use (I think it is called the shebang)?

    Then do I just put in the path to my Python script before the exit 0?

    As the Python script is reading the USB port, does that have to be open first?
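    For what it's worth, a sketch (not from the thread; the script path is a made-up example): you don't change rc.local's own shebang, as it stays a shell script. You just add a line before the exit 0 that runs the Python interpreter on your script:

    ```sh
    # Added just before "exit 0" in /etc/rc.local.
    # /home/pi/logger.py is a hypothetical path - use your own script's location.
    # The trailing & runs it in the background so booting isn't held up;
    # stdout/stderr go to a file you can check later.
    python /home/pi/logger.py > /var/tmp/logger_boot.txt 2>&1 &
    ```

    The USB serial device should normally exist by the time rc.local runs, but if the script starts before the adapter is ready, a short sleep or a retry loop around the open is a cheap fix.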
    • Posted By: Ed Davies, Sep 25th 2013

    Posted By: SteamyTea: "Had a look at the init file and it seems I have to write a script to make things work there. Is that right?"
    On the previous page you talked about commands like “service serial start” and “service serial stop” so I assume such a script already exists - it just needs to be linked to somehow. Not sure as I know very little and it's distribution dependent but my first guess would be that your script is called /etc/init.d/serial and that to have it started and stopped automatically you need a symbolic link to it from /etc/rc5.d or some such. Really, asking about this on a forum which knows about your particular Linux distribution would be much better than here.

    rc.local seems like another option but a quick Google shows various reasons why various things are best not put there including starting and stopping services but not sure if the reasons matter to you.
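    To make that guess concrete (assuming the init script really is /etc/init.d/serial, which is only a guess): on Debian-family systems the /etc/rcN.d links are normally created with update-rc.d rather than by hand:

    ```sh
    # Create the standard start/stop links in /etc/rcN.d for an existing
    # init.d script (here assumed to be called "serial"):
    sudo update-rc.d serial defaults

    # Roughly equivalent to hand-making links such as:
    #   sudo ln -s /etc/init.d/serial /etc/rc5.d/S99serial
    ```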
    • Posted By: SteamyTea, Oct 2nd 2013
    Right, after several hours of fiddling over the last few days I have finally got my python script to run at boot.

    Seems that the easy way is to use 'Upstart', which is not installed by default on the 'wheezy' distribution for the Raspberry Pi.

    Then you make a xxxx.conf file that is kept in the /etc/init directory.
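    A minimal sketch of such a .conf file, with made-up names (the job name, description and script path are examples, not from the thread):

    ```
    # /etc/init/logger.conf - hypothetical Upstart job
    description "CurrentCost logger"

    start on runlevel [2345]
    stop on runlevel [016]

    # Restart the script automatically if it crashes
    respawn

    exec python /home/pi/logger.py
    ```

    Once the file is in place, `sudo start logger` runs the job immediately, and it will start on its own at the next boot.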

    All very frustrating when it is not working but quite easy when it is.

    Now to buy some more real-time clocks, and no more worrying about losing power.
    • Posted By: SteamyTea, Dec 21st 2013

    Posted By: Mackers: "Very interesting, once I get it all sorted I'll be looking for help. I think I'll add a few sensors in the rooms in the house to log temperatures.

    Seems a great piece of kit.

    Really want to create a nice graphical display for showing tank temps and graphs. How easy is that?"


    I use the Dallas 1-wire temperature sensors:

    http://www.homechip.com/catalog/product_info.php?cPath=26&products_id=99

    They run on the same bus, so you only need to use one of the GPIO (general-purpose input/output) pins to transfer the data.
    A small Python script I got from here:

    http://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/temperature/
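    That tutorial boils down to reading a small text file that the w1-therm kernel driver exposes. A sketch of the parsing (the sensor ID in the path is a made-up example):

    ```python
    def parse_w1_slave(text):
        """Parse the two-line w1_slave format from the w1-therm driver;
        return the temperature in Celsius, or None if the CRC check failed."""
        lines = text.strip().splitlines()
        if not lines[0].endswith("YES"):
            return None                      # bad CRC - re-read the sensor
        return int(lines[1].rsplit("t=", 1)[1]) / 1000.0

    if __name__ == "__main__":
        # On the Pi (after "modprobe w1-gpio" and "modprobe w1-therm") you
        # would read the real file; 28-000005e2fdc3 is a made-up sensor ID:
        # text = open("/sys/bus/w1/devices/28-000005e2fdc3/w1_slave").read()
        sample = ("73 01 4b 46 7f ff 0d 10 41 : crc=41 YES\n"
                  "73 01 4b 46 7f ff 0d 10 41 t=23187\n")
        print(parse_w1_slave(sample))  # 23.187
    ```
    
    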

    I have not bothered to graph any data, as I do this in a spreadsheet, but there are packages for Linux that allow you to do this.
    Gnuplot is one.

    http://www.gnuplot.info/
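    As a sketch of what a gnuplot script for logged temperatures might look like (temps.log and its two-column layout are assumptions, not from the thread):

    ```
    # Assumes temps.log holds lines of: unix-timestamp temperature
    set xdata time
    set timefmt "%s"
    set format x "%H:%M"
    set xlabel "Time"
    set ylabel "Temperature (C)"
    plot "temps.log" using 1:2 with lines title "tank temp"
    ```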

    For monitoring the electrical side I use a CurrentCost Envi as it is reasonably cheap, can monitor the power in up to 10 circuits (9 current clamps and one optical), does temperature and is easy to interface.

    http://www.currentcost.com/product-cc128.html

    The light level monitor I used this:

    http://www.raspberrypi-spy.co.uk/2012/08/reading-analogue-sensors-with-one-gpio-pin/

    What I have not got around to doing is writing some code to correct errors (you get a lot when reading hardware), using all three scripts together, plotting the output and publishing to a website. There are reasons I do not publish to the web (basically I am not allowed to, as it is part of my research project), but I may give it a go with my domestic usage; there are plenty of tutorials that show how to do it.

    The biggest headache I had was getting scripts to run at startup, but there is a program called 'Upstart' that makes it easy; it just took a while to find out about it and how to set it up.

    The other thing that is useful is a 'Real Time Clock', as the RPi does not have one. I got these ones:

    http://www.cjemicros.co.uk/micros/individual/newprodpages/prodinfo.php?prodcode=4D-RaspberryPi-RealTimeClock-RTC

    This allows for a power loss and restart without losing the time.

    There are probably better and cheaper bits of kit but I stuck with what I knew, or suppliers I had dealt with before (I ordered some transmitters and 433MHz receivers but they never turned up; I got my money back through Amazon).

    The Python side I find quite frustrating, as a simple mistyped comma or full stop can cause things to stop, and it can take days to find the problem. I downloaded and installed an editor called Geany; it works well on the RPi.

    The RPi has a simple way to install programs from the command prompt; it takes a little getting used to, but is really simple (apt-get install 'program name').
    One area that causes problems is the 'rights' that users have. I tend to do everything as 'root', but if I was going to connect to the internet then I would look into the security side a bit more carefully.

    The hardest thing I found was understanding that in Linux 'everything is a file'. So any hardware you add just shows up as a file somewhere in the Linux file structure. The RPi has a very simple file structure unlike fully blown Linux systems that seem to have all sorts of partitions, directories and files, all very confusing for a Windows/DOS user.

    Lots more to experiment with, making a 'flow' meter would be useful.
    • Posted By: borpin, Dec 21st 2013
    I am looking at using the OpenEnergyMonitor stuff as it is a bit more of a ready to use system but I'm interested in what you have done.

    On starting python scripts at startup try http://blog.scphillips.com/2013/07/getting-a-python-script-to-run-in-the-background-as-a-service-on-boot/

    I am using a package called 'monit' (apt-get install monit) which can be used to monitor 'services' (or daemons if you like) and restart them if they crash.
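    For illustration, a monit stanza for a logging service might look something like this (the file name, service name, pidfile and start/stop commands are all hypothetical; the start/stop lines here assume an Upstart job as discussed above):

    ```
    # /etc/monit/conf.d/logger - hypothetical example
    check process logger with pidfile /var/run/logger.pid
        start program = "/sbin/start logger"
        stop program  = "/sbin/stop logger"
    ```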

    The thing to remember is that the main pi OS is based on Debian so if you are looking for info, include that (rather than pi) in the search query.
    • Posted By: Ed Davies, Dec 21st 2013

    Posted By: SteamyTea: "The RPi has a very simple file structure unlike fully blown Linux systems that seem to have all sorts of partitions, directories and files, …"
    AFAIK, Debian on a RPi has the same sort of file system as any other Linux system; it might be missing the odd corner but the basic structure is all there.

    Linux has no particular need for more or fewer partitions than a Windows system; the biggest difference is that Windows (and DOS) treats each partition as a separate drive with different letters and their own separate trees of directories, whereas Linux has a single directory tree which spans the partitions. Thinking about where the subtrees on the various partitions sit in the overall tree is a bit of a jump, but once it's all set up you don't have to worry about it. Of course, many people's first encounter with Linux is on a dual-boot system where the relationship between Windows and Linux partitions can be a bit tricky - somebody who had been using Linux then tried to add Windows would have as much of a puzzle setting up the partitions (actually, probably more, as Windows is a bit ruder about overwriting other systems' boot information).
    • Posted By: borpin, Dec 21st 2013

    Two other things:

    If you are logging data you need to use an HDD, as your SD card will fail due to the multiple writes.

    If you are developing code, use a personal Git repository to store the code and manage the changes. It also acts as a backup should your SD card fail.
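    A minimal sketch of that workflow, with made-up paths (the project directory and the bare repository on the network drive are examples, not from the thread):

    ```shell
    # One-off: put the project under version control
    cd ~/pi-logger                      # hypothetical project directory
    git init
    git add .
    git commit -m "initial import"

    # One-off: a bare repository elsewhere (e.g. a mounted network drive)
    # acts as the off-card backup
    git init --bare /mnt/nas/pi-logger.git   # hypothetical backup location
    git remote add backup /mnt/nas/pi-logger.git
    git push backup HEAD

    # After each change
    git add -u
    git commit -m "describe the change"
    git push backup HEAD
    ```
    
    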
    • Posted By: SteamyTea, Dec 21st 2013

    Yes, I keep meaning to have a good look at what they are doing there. When I started it was not up and running. They also seem to have wireless connectivity working (which is why I bought 5 433MHz transmitters/receivers that never arrived, even after several emails, promises that they were sent/resent, and offers of half my money back as long as I removed my negative feedback first; as if).

    The stuff I have done was not hard and is proving to be reliable, and it runs on an SD card (the OEM recommends an HDD).

    I shall have a look at 'monit' as I have had one crash that went unnoticed and I lost a few hours data.

    One thing I like about Linux is that different people have different solutions and you can pick and choose what works best for your setup.
    And it is a helpful community.
    • Posted By: Ed Davies, Dec 22nd 2013 (edited)

    Posted By: borpin (in another thread): "You really need to have the main OS on a small HDD."
    Just the log files need to be on an HDD, surely? And the swap partition, if you have one.

    I did some logging of the output of my little solar panels the other winter (2011/12) mostly at 10 second intervals. I was actually logging to a hard disk (1TB USB - a bit OTT for the logging but useful for other purposes), as it happens, but still I had my code buffer quite big chunks of data to allow the disk to spin down between writes for power saving. I imagine that would help a lot too with SD cards.

    PS: yes to using Git as well. Personally, I use Mercurial, but that was probably a Betamax-type mistake - I changed from my previous version control system (Subversion) just before it became clear that the dominance of Github was swinging things in Git's favour. I happen to think that Git is well named - you can do more directly in it than with Mercurial but it's all a lot more complicated and booby-trapped. Still, going with the standard is good even if it's a little harder to use.

    The big advantage of any version control system, I find, is that you don't worry about deleting code. Before, I used to keep copies of files around "just in case" I later wanted some routine I'd decided, for the moment, that I didn't need. Knowing you can get it back from an old revision relieves that worry and the ensuing file clutter. And 99% of the time you don't want it anyway.

    One of the advantages of that 1TB USB drive was that it could store separate Mercurial repositories from the ones on my laptop and netbook, giving an immediate first level of backup. As only changed files are copied, it's very quick to have a backup on a separate machine. It, and the Pogoplug it was plugged into, are in storage at the moment, and it feels a bit precarious waiting to back up checkins until I do it to a USB stick or SD card every couple of days.
    • Posted By: SteamyTea, Dec 22nd 2013

    Or copy the data to the cloud, or a backup server if you are running one anyway. I have a network hard drive (I think that and the router are my major baseload energy users) and I could write to that (when I find out how to do it).

    I wonder if it is possible to boot an RPi from a network, that would be pretty neat.

    I am running cheap 4GB cards and don't think I have had one fail. I have been running them for 12 months now, but none continuously. The longest continuous run is from 2/10/2013, but that was not a new card.

    Is there a way to log or view the read/writes to the SD card?
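    One way, without installing anything: the kernel keeps per-device I/O counters in /proc/diskstats, so you can take two snapshots a few minutes apart and compare. A sketch (the sample line is made up; the field positions follow the documented /proc/diskstats layout):

    ```python
    def disk_writes(diskstats_text, device="mmcblk0"):
        """Return (writes_completed, sectors_written) for one device from
        the content of /proc/diskstats, or None if the device isn't listed."""
        for line in diskstats_text.splitlines():
            fields = line.split()
            # /proc/diskstats layout: major minor name reads ... with
            # writes completed at field 7 and sectors written at field 9
            if len(fields) > 9 and fields[2] == device:
                return int(fields[7]), int(fields[9])
        return None

    if __name__ == "__main__":
        # On the Pi: stats = open("/proc/diskstats").read()
        # mmcblk0 is the usual SD card device name on a Pi.
        sample = "179 0 mmcblk0 1546 256 51156 1680 240 357 4880 25060 0 1880 26720\n"
        print(disk_writes(sample))  # (240, 4880)
    ```

    Alternatively, `iostat -d` from the sysstat package reports the same counters in a friendlier form.
    
    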
    • Posted By: Sprocket, Dec 22nd 2013
    Modern SD cards do wear levelling so they will wear evenly and still take quite some time to wear out.

    They tend not to wear out suddenly. Usually all that happens is that writes start to take longer and longer… eventually becoming so slow that the SD card times out.

    Unless you have configured your filesystem drivers to write-through they will be set to write-back. How many writes happen will depend on how long it buffers data before forcing a write. I don't know what the default settings are though.

    I was using v.cheap SD cards initially.
    But then I discovered how much quicker and more responsive things are if you use a class-10 card, especially if you boot to desktop rather than just a terminal.
    • Posted By: borpin, Dec 22nd 2013

    The problem with deciding what to write to the HDD is that you miss things. If a particular log file is writing more than you anticipate it can ruin the SD card. I had one go on me after only using it for a short while for weather station logging, so I went straight onto an HDD.
    Posted By: SteamyTea: "Or copy the data to the cloud, or a backup server if you are running one anyway. I have a network hard drive (I think that and the router are my major baseload energy users) and I could write to that (when I find out how to do it)."
    The advantage of Git over, say, Dropbox is the inbuilt version control, so you do not need to keep loads of versions manually.
    • Posted By: Seret, Dec 23rd 2013 (edited)

    Posted By: Sprocket: "Modern SD cards do wear levelling"

    Do they? Is that logic in the card itself? I thought SD cards were pretty dumb.
    • Posted By: Sprocket, Dec 23rd 2013 (edited)
    Yes, it is down to the card itself.

    The writeable blocks are grouped into larger chunks that can be bulk-erased (the whole chunk at once) very quickly and efficiently. This makes it efficient to collect unused blocks into physical groups that can be bulk-erased together, which requires some management of exactly where data is put, and some copying of data behind the scenes to create whole chunks of unused blocks.

    This garbage collection creates free, already-erased chunks ready to be allocated and written, but it requires SD cards to be smarter than when the SD standard was originally created. The benefit is much faster write times, as whole erased blocks are available. It is much like the management done by SSD controllers, except that SSD controllers typically also do this across 4 or 8 lanes, trying to create sets of free chunks that can be written in parallel. The reason is the same: addressing the main weakness of NAND flash, i.e. write bandwidth.

    It can work a whole lot better if the OS is aware of the media, so it can tell the media which blocks have been freed (and are therefore mergeable, and likely to be re-written) so that they can be moved between chunks and, when a whole load of unused blocks are collected into one chunk, erased together. This is the "Trim" support present in modern OSes.
    • Posted By: SteamyTea, Dec 23rd 2013

    Debian 'Wheezy' has Trim support, though I'm not sure if the cut-down version for the RPi has it.
    • Posted By: borpin, Dec 26th 2013

    Posted By: Seret:
    > Posted By: Sprocket: "Modern SD cards do wear levelling"
    > Do they? Is that logic in the card itself? I thought SD cards were pretty dumb.
    Plenty of evidence that it makes no difference. Card failure is pretty much assured if you use it for some form of logging software. The time to fail varies, but fail they will.
    • Posted By: SteamyTea, Dec 26th 2013

    I find that a bit worrying, but I should be able to gather some data over the years. I think I have had one go, but a reformat seemed to make it work again, so it may have been something else.

    This month's IET comic had a bit on it.
    • Posted By: Seret, Dec 27th 2013

    Posted By: borpin:
    > Posted By: Seret:
    > > Posted By: Sprocket: "Modern SD cards do wear levelling"
    > > Do they? Is that logic in the card itself? I thought SD cards were pretty dumb.
    > Plenty of evidence that it makes no difference. Card failure is pretty much assured if you use it for some form of logging software. The time to fail varies, but fail they will.


    Well yes, but you can say that for any storage medium. I don't trust hard drives as far as I can chuck 'em either, especially the spinny kind.
    • Posted By: borpin, Dec 27th 2013

    Posted By: Seret:
    > Posted By: borpin: "Plenty of evidence that it makes no difference. Card failure is pretty much assured if you use it for some form of logging software. The time to fail varies, but fail they will."
    > Well yes, but you can say that for any storage medium. I don't trust hard drives as far as I can chuck 'em either, especially the spinny kind.
    Of course, but the MTBF of the spinny kind is far greater than that of flash memory of any kind. If you are that worried, you have a NAS with at least 2 different disks (age and make) in a RAID configuration, automatically backed up to an external hard disk, so when the house is on fire the HD can just be grabbed and run out the door with :bigsmile:

    The point is if you only keep your data in one place then you increase the risk of losing it. You then look at the risk and try and mitigate the likelihood/impact to a level you find acceptable. Your risk appetite for different data sets will probably be different.
    • Posted By: Sprocket, Dec 27th 2013
    > Plenty of evidence that it makes no difference

    It makes a huge difference. To both write performance and to longevity.
    With wear levelling the writes are spread over the whole drive. It will still fail after a lot of writes but that number of writes will be many times greater than if wear levelling were not being used.

    Wear levelling works better (faster) if there is a fair bit of disk unused (or reserved for wear levelling) but only if the media supports trim (most SD cards don't but many SSDs do) and if the OS uses it. But even without TRIM it makes a huge difference to lifespan.

    But yes, it will still wear out eventually. The best way to avoid this is just to buffer your writes so they are only flushed when there is a whole block to write or when the program exits. This is pretty easy to do.

    If you write, e.g., multiple lines of text at intervals, and the OS flushes every time it sees a newline, then each line written to disk will cause the media to:

    - read the block that will be modified
    - allocate another free block
    - erase that block
    - write the new block contents

    If the media is wearing out then those last two steps become slower and slower.
    Depending on your filesystem type, it may cause exactly the same accesses for a directory block as well.

    A real-world worked example of the simple, dumb wear levelling done by most SD cards and USB sticks.
    Take a 2GB SD card with a 1K block size (pretty typical for smaller SDHC cards). That is 2M blocks.
    Let's say we append 16 bytes of data to a file once per second.

    If we do not buffer whole blocks then a new block will be allocated, erased and written for each write we do.
    If we are using FAT32 then the same may also happen to a directory block each time.
    Let's ignore the FAT writes because they are much less frequent (one every 32 writes in this case).
    So that is 2 new blocks written every second.
    So after 1 million seconds we will have written to every block on the SD card just once. That is about 11 days.
    Good quality MLC NAND flash typically has a write endurance of 10,000 cycles. That is about 300 years.
    Even if you are using cheap low-quality NAND flash (maybe 1,000 cycles), that is still 30 years.
    Even if your OS does 10x the writes you don't know about in the background, that is still 3 years on the same very cheap media.

    Not many HDDs will last 30 years. And in this scenario they consume more power than the Pi too.
    And bear in mind that a 16GB SD card will last 8x as long as a 2GB one in this scenario.

    But if you buffer your 16-byte writes to the block size you will only perform a physical media write every 64 appends to the log, i.e. in the same scenario the lifespans above are 64x as long.
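    The arithmetic above, restated as a few lines of Python (same assumptions as the worked example: 2M blocks, two block writes per second, unbuffered):

    ```python
    # Back-of-envelope check of the SD card endurance figures above.

    blocks = 2000000            # 2GB card / 1K block size
    writes_per_sec = 2          # one data block + one directory block per second

    days_per_full_pass = blocks / writes_per_sec / 86400
    print(round(days_per_full_pass, 1))      # 11.6 days to hit every block once

    for endurance in (10000, 1000):          # write cycles: good MLC vs cheap flash
        years = endurance * days_per_full_pass / 365
        print(endurance, "cycles ->", round(years), "years")

    # Buffering 16-byte appends into whole 1K blocks divides the write count:
    print(1024 // 16)                        # 64x fewer physical writes
    ```
    
    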

    Yes, I have seen cheap SD cards fail after a few weeks use. I have also seen plenty of cheap HDDs fail in the first year. And if you re-write even a decent SD card flat out, maybe 1000 writes a second, you can kill it in half an hour whereas a HDD may well survive that treatment.
    • Posted By: borpin, Dec 27th 2013

    @Sprocket I love the theory, but the experience of myself and others is that SD cards fail more regularly than that. PyWWS is weather station logging software that writes about once every 5 minutes, and a card can go in a few months. I lost a not-so-cheap Kingston card using it.

    So my advice, for anyone who does not want to delve into the depths of Linux filesystems, buffering data etc. and just wants to use some of the prebuilt systems out there on a Pi, is: use an HDD. Emoncms do not even offer a 'normal' prebuilt SD card image now, for just this reason. All the cards are either read-only or HDD-based.

    YMMV
    • Posted By: SteamyTea, Dec 27th 2013

    My kit is going in sheds and the like, so an HDD is not really viable (can't justify the expense either).

    So as I write the data to a text file every 6 seconds, could I write a bit more Python to buffer the data before I write it to the file stored on the SD card? Or maybe to two SD cards in a card reader? (Just thought of that.)
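    One possible shape for that extra bit of Python (a sketch; the file path and the 4KB threshold are assumptions, and real code would also want a time-based flush so a crash loses at most a few minutes of readings):

    ```python
    import atexit

    class BufferedLogger:
        """Collect log lines in memory; write to the SD card only when
        roughly a flash block's worth is pending (or at program exit)."""

        def __init__(self, path, flush_bytes=4096):
            self.path = path
            self.flush_bytes = flush_bytes
            self.pending = []
            self.size = 0
            atexit.register(self.flush)   # don't lose the tail on a clean exit

        def write(self, line):
            line = line.rstrip("\n") + "\n"
            self.pending.append(line)
            self.size += len(line)
            if self.size >= self.flush_bytes:
                self.flush()

        def flush(self):
            if self.pending:
                with open(self.path, "a") as f:
                    f.writelines(self.pending)
                self.pending = []
                self.size = 0

    # Usage (path is an example): log = BufferedLogger("/var/tmp/readings.txt")
    # then log.write(reading) every 6 seconds; the card only sees a write
    # every few minutes instead of every 6 seconds.
    ```
    
    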
    • Posted By: Sprocket, Dec 28th 2013
    The durability of NAND flash is very well understood and is relied upon by quite a lot of commercial products. I expect the problems with emoncms and pywws running on SD card are software related rather than hardware.

    Things like using a database rather than just writing to disk can generate a lot of extra writes. Use of library code from Apache etc. can also generate a lot of extra logging write activity. So does use of a journalling filesystem (ext3/ext4 filesystems log disk activity at intervals so that disk changes can be unwound; this is very bad for flash memory).

    But you do have a point - if you are using a complex OS and libraries like Linux and SQL and apache you may well not know what is going on behind the scenes and it may not be easy to find out. There could be a *LOT* more writes than you realise.

    There have been plenty of problems with HDDs on the Pi too, mainly around shared power supplies: the Pi itself becoming unstable, and marginal behaviour in these conditions causing HDD corruption. Though if you can find a reliable HDD + USB hub + PSU config then it at least ought to stay reliable.

    ST, it is hard to say without monitoring actual disk writes in your application. I do know that the standard C library "fwrite" function does buffering by default, which reduces writes to buffer-sized chunks, and you can set the buffer size using setvbuf.
    It would certainly be sensible to make the boot filesystem read-only so you don't accidentally trash anything critical if the drive starts to wear. Putting log data onto a separate dedicated writeable card could help too... but as Borpin suggests, you are then counting on a fair bit of knowledge of how underlying software parts work and having to make a non-standard setup. That's a bit of a faff and if you get it wrong you can easily trash an SD card.

    RPis are great to play with, but for jobs like this they are far too prone to trashing their filesystem on power failures etc. for my liking. Have you considered a simpler system, something that does not do extra writes behind the scenes, e.g. an Arduino?
    • Posted By: Mackers, Dec 28th 2013

    Been doing some searching around about the Raspberry Pi and it seems a brilliant piece of kit.
    I'm going to order one in the next week or so and get playing.

    Loads of people have various bits of automation, sensors etc. but I really want to develop my own bit with a nice graphical front end.

    Maybe we can all contribute in some way? ST?
    • Posted By: SteamyTea, Dec 28th 2013
    I will accept any help I can get :bigsmile:
    • Posted By: Seret, Dec 29th 2013

    Posted By: borpin: "Of course but the MTBF of the spinny kind is far greater than flash memory of any kind."

    "Any flash" is a bit broad; you can't lump all flash memory in together. An SD card and a proper SSD are very different animals. Secondly, SSDs are actually more reliable than magnetic drives. Magnetic drives aren't at all reliable, which is hardly surprising considering they're mass-produced devices with tiny moving parts working at high speed to some pretty insane tolerances. The fact they work at all is an engineering marvel.

    Posted By: borpin: "If you are that worried you have a NAS with at least 2 different disks (age and make) in a RAID configuration automatically backed up to an external hard disk so when the house is on fire the HD can just be grabbed and run out the door with :bigsmile:"

    Indeed, we don't keep any data on local machines any more; it's much easier to protect it if it's kept centrally.
   