Haxin Mainframes

A blog about stuff I do, find interesting, or want to blab about..

Using nVidia Drivers on a Thinkpad W530

Drivers

To get the nVidia drivers working on a W530 laptop you must select the discrete graphics device in the BIOS.

Then run the following to install the "latest" nVidia drivers (using sudo apt-get install nvidia-current may be better):

sudo add-apt-repository -y ppa:ubuntu-x-swat/x-updates; sudo apt-get update; sudo apt-get install nvidia-331

The driver install should automatically create a new xorg.conf, but if it does not you can run:

sudo nvidia-xconfig

Some people even choose to purge the open-source nouveau driver:

sudo apt-get purge xserver-xorg-video-nouveau

And blacklist it from the kernel (if there is no blacklist-nouveau.conf in /etc/modprobe.d/ you can just create it):

sudo vim /etc/modprobe.d/blacklist-nouveau.conf

Adding these lines:

blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off

[An askubuntu post also shows another way](http://askubuntu.com/questions/451221/ubuntu-14-04-install-nvidia-driver) to disable/blacklist the driver:

echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
sudo update-initramfs -u

This worked great until I had a system update. I rebooted and my laptop was flashing, trying to "startx" but failing over and over; I could not even switch to a terminal via Ctrl-Alt-F1.

It sounds like this can be fixed by [rebuilding with every kernel update](http://askubuntu.com/questions/536562/ubuntu-14-04-with-nvidia-driver-blank-screen-after-kernel-update):

sudo apt-get install dkms build-essential linux-headers-generic linux-headers-`uname -r` linux-source

Installing dkms and the headers before running the installer should give you a DKMS option during setup. DKMS will prevent the problem you are experiencing, so you do not have to re-install the driver after every kernel upgrade.

The underlying issue is that, without DKMS, the nVidia kernel module is not rebuilt against the new kernel on each kernel upgrade.

GRUB

You will also need to update your boot loader to add a boot option, nox2apic:

sudo vim /etc/default/grub

and add the "nox2apic" flag to the GRUB_CMDLINE_LINUX option, or in my case GRUB_CMDLINE_LINUX_DEFAULT. I would look for the variable that already has the splash option and add it to that one; I am sure it would not hurt to add nox2apic to both if your grub config has both GRUB_CMDLINE_LINUX and GRUB_CMDLINE_LINUX_DEFAULT.

You will be changing something like GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nox2apic".

To update grub you will need to run:

sudo update-grub

This site has a breakdown of why this option is needed.

Checking Out Sublime Text 3 Binary Hex Diff Versus Cracked Version

I saw a site offering Sublime Text 3 with a hack/crack, and I am always paranoid. Wondering what the actual difference was between the official 3065 build and the crack, I checked.

Running:

xxd sublime_text\ crack\ linux\ 64\ build\ 3065 c1.hex
xxd /opt/sublime_text/sublime_text c2.hex
diff c1.hex c2.hex

I saw that there was the following diff:

2111c2111
< 00083e0: f88e 00e8 b751 0700 3bc0 0f94 c084 c088  .....Q..;.......
---
> 00083e0: f88e 00e8 b751 0700 85c0 0f94 c084 c088  .....Q..........

Putting that into an online assembler/disassembler:

The cracked file:

.data:0x00000000    f8  clc
.data:0x00000001    8e00    mov    es,WORD PTR [rax]
.data:0x00000003    e8b7510700  call   func_000751bf
.data:0x00000008    3bc0    cmp    eax,eax
.data:0x0000000a    0f94c0  sete   al
.data:0x0000000d    84c0    test   al,al

The original file:

.data:0x00000000    f8  clc
.data:0x00000001    8e00    mov    es,WORD PTR [rax]
.data:0x00000003    e8b7510700  call   func_000751bf
.data:0x00000008    85c0    test   eax,eax
.data:0x0000000a    0f94c0  sete   al
.data:0x0000000d    84c0    test   al,al

Notice the only difference is that the original's test eax,eax becomes cmp eax,eax in the crack. According to an assembly reference, test is a bitwise AND while cmp is a subtraction, so cmp eax,eax always produces zero and sets the zero flag, and the following sete al always returns true no matter what the preceding call returned. In the original, test eax,eax only sets the zero flag when that call actually returned zero. So the patch does nothing beyond forcing that one check to pass.
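
If you would rather not paste the bytes into a web page, the same listings can be reproduced locally with the Capstone disassembler's Python bindings. A minimal sketch, assuming capstone is installed via pip; the bytes are the two differing rows from the xxd diff above (minus the trailing partial instruction):

# Disassemble the differing bytes from both builds (requires the capstone package).
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

rows = {
    "original": bytes.fromhex("f88e00e8b751070085c00f94c084c0"),
    "crack":    bytes.fromhex("f88e00e8b75107003bc00f94c084c0"),
}

md = Cs(CS_ARCH_X86, CS_MODE_64)
for name, code in rows.items():
    print(name)
    for insn in md.disasm(code, 0x83e0):
        print("  0x%x:\t%s\t%s" % (insn.address, insn.mnemonic, insn.op_str))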

Looks like a safe hack to me..

NOTE: I pay for Sublime Text 3. It is amazing software and I am all for supporting the authors. This page just came up when searching for Sublime Text 3 and I was curious.

Installing Openstack via RDO on CentOS 6.5

RDO Packstack failing to install Openstack with the latest CentOS

Today I needed to bring up a new "all in one" Openstack virtual machine.

I now work at a company called Virtustream, where I am currently working on projects that require me to run a few virtual machines at a time, all on my poor little W530 Thinkpad laptop. The strain on my laptop (or, more likely, the heat from me being an idiot and working from bed while sick, which did not let the laptop breathe well) caused my hard drive to crash.

I went to get to work in the morning and was greeted by my favorite:

Cannot run anything, your filesystem is mounted read only, etc etc messages..

I have seen this a few times before; I have replaced my hard drive maybe 3 or 4 times already (I bought a laptop pad with ventilation in case I work from bed anymore). Anyhow, I just upgraded to an SSD. I do not know how I ever worked without this thing.. It is such an upgrade. So much faster.

Anyway, what this whole blog post is really about: when I went to install my IceHouse Openstack All In One via RedHat's RDO PackStack, which is found at RDO PackStack Quick Install, I ran through the simple directions like usual but the install failed. I even tried disabling SELinux by editing /etc/selinux/config and setting:

SELINUX=permissive

then restarting and installing again. I then tried it with a fresh VM, reinstalling CentOS 6.5. Nothing worked.

It kept dying at:

192.168.122.166_nova.pp:                             [ DONE ]
Applying 192.168.122.166_neutron.pp
192.168.122.166_neutron.pp:                       [ ERROR ]
Applying Puppet manifests                         [ ERROR ]

The logs show the following in red:

Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
Warning: Scope(Class[Neutron::Server]): The sql_connection parameter is deprecated, use database_connection instead.
Warning: Scope(Class[Neutron::Plugins::Ml2]): enable_security_group is deprecated. Security is managed by the firewall_drive value in ::neutron::agents::ml2::ovs.


Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
   (at /usr/lib/ruby/site_ruby/1.8/puppet/type/package.rb:430:in `default')


Error: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0]
Error: /Stage[main]/Packstack::Neutron::Bridge/Exec[sysctl_refresh]/returns: change from notrun to 0 failed: sysctl -p /etc/sysctl.conf returned 255 instead of one of [0]

Among a bunch of normal colored notices..

After googling for a while I found that someone had already posted a bug report and a fix!

Openstack PackStack CentOS 6.5 Install Patch

Looking through the patch for the fix, I see that they just needed to add a -e parameter to sysctl so it ignores the unknown keys in the /etc/sysctl.conf that ships with CentOS 6.5.


Finally, the workaround. You do not need to reformat/start from scratch like I did a few times. You can just go and edit:

vim /usr/share/openstack-puppet/modules/packstack/manifests/neutron/bridge.pp

And then change the line:

    command => 'sysctl -p /etc/sysctl.conf',

to:

    command => 'sysctl -e -p /etc/sysctl.conf',

Note the new -e param: it makes sysctl ignore unknown keys, so no error is returned and the packstack install no longer crashes.

Then, to continue your packstack install, you need to re-run packstack with your "answer file", which contains all the passwords and info from the partially completed install you just attempted.

Run this via:

packstack --answer-file=packstack-answers-20140910-111306.txt

NOTE: Your answer file will have a different name/timestamp!

And after this your install should work! Good luck!

How to Record Live Video From PyCon

PyCon has so many awesome talks! The problem is that they are happening at the same time!

To deal with this I have a 2 part solution:

  • First, I go to the page of the talk I want to watch and run this JS code I whipped up to grab the actual video URLs:

JS Code:

for (m in player_jwobject.config.modes) {
    console.log(player_jwobject.config.modes[m].type);
    if (player_jwobject.config.modes[m].config.levels) {
        for (l in player_jwobject.config.modes[m].config.levels) {
            console.log(" - " + player_jwobject.config.modes[m].config.levels[l].file)
        }
    }
};

Note that this code gives you both the html5 and flash options for the streaming video. The output will look like this:

html5
 - http://50.16.83.230:8080/webcast-low.webm?q=1363375116439
 - http://50.16.83.230:8080/webcast-high.webm?q=1363375116439
flash
 - http://50.16.83.230:8080/webcast-low.flv?q=1363375116439
 - http://50.16.83.230:8080/webcast-high.flv?q=1363375116439
download
  • Second, choose one of these streams and dump it. Either will probably work fine; I chose the low-quality html5 stream:

Command to run (requires mplayer):

mplayer -dumpstream "http://50.16.83.230:8080/webcast-low.webm?q=1363375116439"\
 -dumpfile interpetermplayer

This will save the webm stream to a file called interpetermplayer.

Or use ffmpeg:

ffmpeg -i http://50.16.160.194:8080/webcast-high.webm\?q\=1363381000621 pycon.webm
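
If you would rather stay in Python, roughly the same dump can be done with the requests library. A quick sketch, assuming requests is installed and using one of the URLs printed by the JS snippet above (the output filename here is just my choice):

# Stream a webcast URL to a local file, a rough Python stand-in for mplayer -dumpstream.
import requests

url = "http://50.16.83.230:8080/webcast-low.webm?q=1363375116439"
with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    with open("pycon-talk.webm", "wb") as out:
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            if chunk:
                out.write(chunk)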

Ubuntu(Kubuntu) Web Pages Load Painfully Slow

My new laptop from work (I recently started a new job at a company called Virtustream) was loaded with Ubuntu (Kubuntu, I believe, since it is running KDE and I do not think that is the default with Ubuntu), which was fine; I like the aptitude package manager. However, I noticed that web pages loaded PAINFULLY SLOWLY. I tried a number of things, but when I saw that it was really the new sites I had not yet visited that were loading slowly, I guessed DNS.
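
Before changing anything, a quick way to confirm DNS is the culprit is to time the lookups themselves. A small sketch using only the standard library (the hostnames are arbitrary examples):

# Time DNS lookups for a few sites; if these take hundreds of milliseconds or more
# while pinging a raw IP is fast, DNS is the bottleneck.
import socket
import time

for host in ["example.com", "python.org", "kernel.org"]:
    start = time.time()
    socket.getaddrinfo(host, 80)
    print("%-12s %.3f s" % (host, time.time() - start))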

Upon googling Ubuntu 12.10 DNS I saw there were a number of people complaining. I decided to give it a try and pointed my /etc/resolv.conf at Google's DNS servers:

nameserver 8.8.8.8
nameserver 8.8.4.4

And now it runs like a charm..

Using UDP in the Python Tornado Framework

A little while ago I was working on an API endpoint that needed to ask the BitTorrent Live video streaming trackers how many people were watching which swarms. I needed to do this by sending the byte 4 to the tracker on a certain IP and port. We were using Tornado. Previously, to use UDP sockets with the Tornado event loop (in my Python DHT project, for example), I just created a non-blocking UDP socket and added a handler for the READ state:

# From my DHT project: a non-blocking UDP socket registered with the IOLoop for READ events.
self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.sock.setblocking(0)
self.io_loop.add_handler(self.sock.fileno(), self.handle_input, self.io_loop.READ)

# For the tracker query below: a plain non-blocking UDP socket (later wrapped by UDPSockWrapper).
udpsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udpsock.setblocking(0)

The callback for this handler looked like this:

def handle_input(self, fd, events):
    (data, source_ip_port) = self.sock.recvfrom(4096)
    bdict = bdecode(data)  # bdecode comes from a bencode library

    # Got a response to some previous query
    if bdict["y"] == "r":
        self.handle_response(bdict, source_ip_port)

    # Prob gonna have to add a listener socket
    # Got a query for something
    if bdict["y"] == "q":
        self.handle_query(bdict, source_ip_port)

I believe this is an appropriate way to handle UDP sockets in Tornado (the library only ships with TCP/HTTP based clients as far as I know). However, my friend Kyle Graehl put together a nice UDP wrapper class that is much closer to, and even borrows methods from, the general Tornado IOStream class.

Using the UDPSockWrapper I was able to do something like this (I actually added an __enter__ and __exit__ so I could use it in a with statement, though I am not sure that is actually more Pythonic than a try/finally):

udpsockwrapper = UDPSockWrapper(udpsock, in_ioloop=io_loop)
response = None
with udpsockwrapper:
    udpsockwrapper.sendto(chr(4), (tracker_ip, int(tracker_port)))
    response = yield gen.Task(udpsockwrapper.read_chunk)

You may notice the yield gen.Task above. This uses Tornado's awesome gen library. It basically lets you write your handler as a generator that the event loop steps through as your callbacks fire, so you can take nested callback code and write it in a synchronous style. I believe this is similar to yielding Deferred objects in the Twisted framework.
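
For context, that yield only works inside a function wrapped with Tornado's gen machinery; in the Tornado of that era that meant decorating the method with @gen.engine and handing the result back through a callback. A rough sketch of how the surrounding code might look, using the UDPSockWrapper below (the function name and structure here are made up for illustration, not the actual endpoint code):

import socket

from tornado import gen, ioloop


@gen.engine
def query_tracker(tracker_ip, tracker_port, callback):
    # Non-blocking UDP socket wrapped so the IOLoop can drive the read.
    io_loop = ioloop.IOLoop.instance()
    udpsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udpsock.setblocking(0)

    udpsockwrapper = UDPSockWrapper(udpsock, in_ioloop=io_loop)
    with udpsockwrapper:
        # The single byte 4 asks the tracker how many viewers are watching.
        udpsockwrapper.sendto(chr(4), (tracker_ip, int(tracker_port)))
        response = yield gen.Task(udpsockwrapper.read_chunk)
    callback(response)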

My modified version of the UDPSockWrapper:

import logging
import time

import tornado.ioloop

# From Kyle Graehl - http://kyle.graehl.org/
# The __enter__ and __exit__ were added by me.. probably not the best way to use
# these though..
class UDPSockWrapper(object):
    def __enter__(self):
        return

    def __exit__(self, type, value, traceback):
        self.close()

    def __init__(self, socket, in_ioloop=None):
        self.socket = socket
        self._state = None
        self._read_callback = None
        self.ioloop = in_ioloop or tornado.ioloop.IOLoop.instance()

    def __repr__(self):
        return "<UDPSockWrap:%s,rc:%s>" % (self.socket.fileno(), self._read_callback)

    def _add_io_state(self, state):
        if self._state is None:
            self._state = tornado.ioloop.IOLoop.ERROR | state
            #with stack_context.NullContext():
            self.ioloop.add_handler(
                self.socket.fileno(), self._handle_events, self._state)
        elif not self._state & state:
            self._state = self._state | state
            self.ioloop.update_handler(self.socket.fileno(), self._state)

    def sendto(self, msg, dest):
        return self.socket.sendto(msg, dest)

    def recv(self,sz):
        return self.socket.recv(sz)

    def close(self):
        self.ioloop.remove_handler(self.socket.fileno())
        self.socket.close()
        self.socket = None

    def read_chunk(self, callback=None, timeout=4):
        self._read_callback = callback
        self._read_timeout = self.ioloop.add_timeout( time.time() + timeout, 
            self.check_read_callback )
        self._add_io_state(self.ioloop.READ)

    def check_read_callback(self):
        if self._read_callback:
            # XXX close socket?
            #data = self.socket.recv(4096)
            self._read_callback(None, error='timeout')

    def _handle_read(self):
        if self._read_timeout:
            self.ioloop.remove_timeout(self._read_timeout)
        if self._read_callback:
            try:
                data = self.socket.recv(4096)
            except:
                # conn refused??
                data = None
            self._read_callback(data)
            self._read_callback = None

    def _handle_events(self, fd, events):
        if events & self.ioloop.READ:
            self._handle_read()
        if events & self.ioloop.ERROR:
            logging.error('%s event error' % self)

Another side note: read_chunk above has the keyword argument callback. This is a requirement for gen.Task; the function it executes must accept a callback keyword argument (callback=None). To convert any function into one with this callback kwarg, I used a lambda:

lambda **kwargs: db.get_item('users', {"HashKeyElement": {"S": username}}, kwargs['callback'])

You can then use it in gen.Task:

yield gen.Task(
    lambda **kwargs: db.get_item('users', {"HashKeyElement": {"S": username}}, kwargs['callback']))

Open File From Console Python Traceback in Text Editor

Often when I am programming I want to be able to quickly find a function/class definition when I hit a traceback. My normal dev environment is basically just Sublime Text 2 and the OS X console. My co-worker Brahm Cohan showed me a cool trick the other day, which I believe he got from one of his friends.

Basically, you:

Open the Automator app and create a new "Service".

Service Project in Automator

The service should execute the following (you can replace Sublime with whatever editor you use):

open -a Sublime\ Text\ 2 "$1"

Sublime Text command

Then open up System Preferences, go to Keyboard, and under the Keyboard Shortcuts menu scroll down to find your new service. I named mine tosublime and set the Command-L combo to run it.

Preferences Keyboard window

Now I can select the absolute path of a file in my Python tracebacks and hit Command-L, and the file I need to start debugging in opens right up. With a little more work I am sure you could parse out the line number too.
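
For example, a tiny helper could pull both the path and the line number out of a selected traceback line and jump straight there. A sketch, assuming Sublime's subl command-line helper is installed (it accepts file:line):

# Read a traceback line like:  File "/path/to/module.py", line 123, in some_func
# from stdin and open that file at that line in Sublime via the subl helper.
import re
import subprocess
import sys

selection = sys.stdin.read()
match = re.search(r'File "(?P<path>[^"]+)", line (?P<lineno>\d+)', selection)
if match:
    subprocess.call(["subl", "%s:%s" % (match.group("path"), match.group("lineno"))])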

Unlimited Vacation

I always laugh when I see jobs offering unlimited vacation time. It seems like a joke to me. At every company I have ever worked for, people either had vacation time rolling over from previous years or were afraid of losing it by not using it. There seems to be a guilt about using vacation time.

I personally am always worried about using mine; I have this illogical guilt. I feel like I need to be around, working as hard as possible as often as possible, or else I will be viewed as a slacker. If anything, having an allotted amount of vacation time is a blessing: it represents a company-wide accepted amount of time for me to take off. If someone gave me "unlimited vacation time" I would never take any vacation, because I would always feel like I was taking advantage, or at least like I was being perceived as uninvolved. It makes me wonder if companies realize this and use this "benefit" to their advantage.

I would love to hear from anyone working at a company with this benefit and hear about the politics behind using it.

Points from comments here and Hacker News

It was great to see so many different points of view in the comments, both here and on Hacker News. After reading a few I felt I should clarify my reasoning a little.

The "Unlimited Vacation" benefit seems to be more advantageous for companies and possibly misleading for employees. This [article](http://finance.yahoo.com/blogs/the-exchange/unlimited-vacation-time-ultimate-benefit-160807503.html) makes the claim that with the unlimited vacation benefit the limiting factor is really "can you get your work done?" The ironic thing is that at any smaller tech company, which are the companies I see offering this, there is always work to be done. There has never been a point in the last two years where I personally felt like it was a good time to take a vacation; there has always been some feature we imminently needed. Point being, there is always work to be done, so there is rarely a good time to take advantage of your "unlimited vacation".

It also saves the company a full paycheck or so when the employee leaves. As anemitz (a Hacker News commenter) pointed out:

“Another less likely employer benefit to be considered is that in roughly half of that U.S. states, employers must pay out accrued vacation time if there is a policy in place.

An example from California’s vacation faq (http://www.dir.ca.gov/dlse/faq_vacation.htm):

‘For example, an employee who is entitled to three weeks of annual vacation (15 work days entitlement per year x 8 hours/day = 120 hours vacation entitlement per year) who quits on August 7, 2002 (the 219th day of the year) without having taken any vacation in 2002, who has no vacation carry-over from prior years, and whose final rate of pay is $13.00 per hour, would be entitled to $936.00 vacation pay upon separation’ ”

By giving employees unlimited vacation time instead of an allotted amount of PTO, it seems the company no longer has to track accrued PTO or pay out what remains when an employee leaves.
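
For what it is worth, the math in that example works out to exactly the quoted figure:

# Reproducing the California DIR example: 15 days * 8 hours accrued per year,
# 219 of 365 days worked, $13.00/hour final rate of pay.
hours_per_year = 15 * 8
accrued_hours = hours_per_year * 219 / 365.0
print(accrued_hours)          # 72.0 hours
print(accrued_hours * 13.00)  # 936.0 dollars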

And again, there is my guilt about taking a vacation. Personally, if I do not have a quantified amount of expected vacation time, I will not know how much I am expected to take and probably won't take much at all.

On the other hand, as dvo pointed out:

“ It might be a great policy, and I’m sure it depends on the details of how it is implemented and the culture of the company where it is implemented… ”

Surf Journal

I think there is a great opportunity to take advantage of all the great NOAA data the government provides for free. With it I could use supervised learning, with input from professionals and from surfers using the site. I would gather professional input from the other surf forecasting sites out there and allow members to sign up and chart their own surf journals.

This would build surf journals not just for the popular spots like those you see on Surfline, but for any spot a surfer desires. They could specify the closest buoys and the area their secret spot is in, and I would gather wind, tide, and swell information to build a profile of the spot.
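
To give a sense of what that data gathering could look like: NOAA's NDBC publishes recent buoy observations as plain text, so a spot profile could start from something as small as this sketch (the station ID is just an example, and the column names are assumed from NDBC's realtime2 format):

# Pull the most recent observation from an NDBC buoy's realtime2 text feed.
from urllib.request import urlopen

station = "46026"  # example station; a user would pick the buoys closest to their spot
url = "http://www.ndbc.noaa.gov/data/realtime2/%s.txt" % station

lines = urlopen(url).read().decode().splitlines()
header = lines[0].lstrip("#").split()         # YY MM DD hh mm WDIR WSPD GST WVHT DPD APD MWD ...
latest = dict(zip(header, lines[2].split()))  # lines 0-1 are names/units; line 2 is the newest reading

print("wave height (m):", latest.get("WVHT"))
print("dominant period (s):", latest.get("DPD"))
print("wind speed (m/s):", latest.get("WSPD"))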

This blog post is actually being used as a landing page to gauge interest via a Google AdWords campaign. I will set it as the destination and hope people click through my ads; if they do, I will know there are people interested in this idea as well.

Random Idea About Language Implementation and Strings

I was reading this awesome post about why this guy appreciates Erlang.

And this paragraph caught my eye.


Or take string concatenation. If you pop open the implementation of string concatenation in Perl, Ruby, or JavaScript, you’ll are certain to find an if statement, a realloc, and a memcpy. That is, when you concatenate two strings, the first string is grown to make room for the second, and then the second is copied into the first. This approach has worked for decades and is the “obvious” thing to do. Erlang’s approach is non-obvious, and, I believe, correct. Erlang does not use a contiguous chunk of memory to represent a sequence of bytes. Instead, it represents a sequence of bytes as nested lists of non-contiguous chunks of memory. The result is that concatenating two strings takes O(1) time in Erlang, compared O(N) time in other languages. This is why template rendering in Ruby, Python, etc. is slow, but very fast in Erlang.


I just thought it would be a cool little mini project to try to re-implement Python and/or Ruby strings to act like Erlang strings (a linked list of byte chunks rather than one contiguous buffer). I am kind of using this as a note to myself, since writing things down in other places just gets lost a lot of the time..

It would be cool to do this and then compare the new Ruby build with the previous one in a bunch of different string speed tests.

After talking to some friends at work I realized that this has already been done, using the rope data structure. A cool implementation, librope, is a very interesting read.
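
For reference, the core idea of a rope is tiny: concatenation just records the two pieces instead of copying them, and the copying cost is deferred until you actually need the flat string. A toy sketch (nothing like a real implementation such as librope):

# Toy rope: O(1) concatenation by deferring the copy until flatten() is called.
class Rope(object):
    def __init__(self, left, right=None):
        self.left = left       # str or Rope
        self.right = right     # str, Rope, or None
        self.length = len(left) + (len(right) if right is not None else 0)

    def __len__(self):
        return self.length

    def __add__(self, other):
        return Rope(self, other)   # no copying here

    def flatten(self):
        # Walk the tree left-to-right and join the leaves once at the end.
        parts, stack = [], [self]
        while stack:
            node = stack.pop()
            if isinstance(node, Rope):
                if node.right is not None:
                    stack.append(node.right)
                stack.append(node.left)
            else:
                parts.append(node)
        return "".join(parts)


s = Rope("hello ") + Rope("world") + Rope("!")
print(len(s), s.flatten())   # 12 hello world!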

After checking out the librope implementation I realized that using these strings would only be a benefit in very specific situations: long, frequently modified strings.

However, having an implementation accessible in a standard library would definitely be a cool thing to have. I could see it being super helpful when doing something like building a website response from templates.

Actually from a quick google it sounds like this is how PyPy implemented strings.