Conroy Whitney is a Software Engineer, Web Developer, and Android Application Developer. http://www.google.com/profiles/conroy.whitney
This post is in reference to Andreas Antonopoulos: The Future Of Crypto-Currency. There’s a lot of information there. Even though it’s almost 30 minutes long, it’s totally worth checking out.
Andreas starts this talk by addressing concerns stemming from misconceptions that many people (including myself, before this lecture) have today about crypto-currencies, namely:
- How many crypto-currencies are we going to end up with?
- How will we keep track of them all?
- And, more importantly, which one is going to win out?
To address the first question, Andreas proposes that since anyone can create an alt-coin for any reason whatsoever (such as Doge-coin), we will likely end up with hundreds of thousands of different currencies. The purchasing power of each currency may vary (e.g., how many eggs can you buy with it); that would be entirely dependent on the concept on which the currency is based. But that’s the point: since you can base it on anything, there is no limit to how many different types we will end up seeing. Andreas gives the example of a class of 5th grade students creating “Joey-Coin” and “Susie-Coin”, which end up being a popularity contest over who is the coolest kid in the class. How many eggs you could buy with a Joey-Coin is another question altogether. But the takeaway is that trying to control or otherwise limit the creation of new alt-coins would run counter to the core concept of crypto-currencies, so we’re going to have to learn to live with it.
So how will we keep track of them all? To answer this question, Andreas proposes that we will have wallets and exchanges that are able to calculate exchange rates between currencies. I am free to hold whichever currencies I wish; but to use them, I might have to exchange them for another form at whatever the going rate is. This is no different from traveling abroad, though. In Peru, many businesses accept the Nuevo Sol, USD, and sometimes Euros. But if you have other currencies, you must first exchange them before use. Andreas goes on to hypothesize that we might even have some baseline index, like the LIBOR or the S&P or the CPI, which might be used to describe the exchange rates between currencies. Not to control or peg currencies, mind you, just as a useful expression of relative value. Right now we compare the “price” of BitCoin relative to the USD because many people are still thinking of BTC in the old model as an “investment”, rather than in the new model as a useful form of purchasing power. When that changes, so will our way of thinking about the relative value of alt-coins.
But the main point that Andreas addresses in this talk is the misconception that certain currencies will win out and become the new de-facto replacements. He likens this to the idea that, for a long time, you had to rely on major, authoritative newspapers or magazines to learn about a particular current event. Still, even today, we mostly get our news from “main stream” sources. This almost monopolistic control of content stifled discussion, disseminated incorrect information, or sometimes even allowed for downright manipulation of the public. When blogging became popular with the whole new “Web 2.0” in the early 2000s, you began to have thousands of people discussing topics from different angles. The choice of what information you digest became a vote of sorts for that particular blogger. News became less top-down and more bottom-up.
So just as what we read became a vote in how we see the world, so which currencies we use will be votes in how we want society to be. This means that by using a given currency, we are part of a community that expresses the same values and politics. People who believe in inflationary economic policy will use an inflationary currency. Those who believe in a gold standard will use a fixed currency. Andreas gives the example that we could have charity currencies that take a fraction of a percent of each transaction and use it towards some social good like homeless shelters or planting trees. Which currency you use is no longer the result of where you live or what you do. It’s a conscious choice that you make; a vote; a decision to engage with a certain political, philosophical, economic, or social ideal. Or to not engage; that also is your choice.
These currencies will exist alongside each other; they will not be competing, per se, but will complement one another in the same way that other forms of communication media do today. Andreas points out how communication today shifts between different media almost seamlessly: conversations between you and someone else may use texting, voice calling, video chatting, emailing, and (heaven forbid) real in-person, face-to-face talk-talk. Each form of communication has its own strengths and weaknesses; and each complements the others to some degree.
When you choose a particular medium (like AIM back in the day, or Twitter / Facebook / LinkedIn more recently), you are choosing to belong to a particular community with whom you wish to communicate. You use Twitter to talk with people on Twitter; you use Facebook to talk with people on Facebook. You can use both; and you can talk with some of the same people on both; but their uses differ; what you communicate differs; and how you communicate it differs. Similarly, choosing which alt-coins you hold and use will express a desire to associate and transact with individuals who also use those coins. There’s no better example of this than JuggaloCoin: an alt-currency for person-to-person transactions within The Family. Important to note: don’t buy that coin if you’re not a Juggalo. They take that shit seriously, bro.
I said Andreas’ talk covered a lot, didn’t I? So one last thing, and that is this idea of sovereignty. We all know money and power are related. In the past, power created money. But now money can be created first, people give it power by using it, and then the people can use that power to whatever ends. Money becomes the tool and we the masters, rather than the other way around, where money is the master and we are its slaves. I’ll end with a direct quote from Andreas at the end of his talk (about 24 minutes in) that addresses this more eloquently:
Currency, in the end, is really a form of language; it’s a language by which we communicate our expectations and desires of value. Now that we can do it on such a massive scale — now that everyone can create currency — our choices will really matter.
We’re past the zero-sum game. This isn’t about nation-states anymore. This isn’t about who adopts BitCoin first or who adopts Crypto-currencies first; because the Internet is adopting crypto-currencies. And the Internet *is* the world’s largest economy; it’s the world’s most populous economy; it’s the first trans-national economy; and it needs a trans-national currency.
What we’ve really done is we’ve inverted the very basic and most fundamental equation of currency, which is: that for millennia, until the year 2008, sovereignty defined currency. Sovereignty was the basis upon which currency could be created. And that currency allowed that sovereignty to be expressed. The monopolistic control of currency *is* the basis of sovereignty.
And now the Internet has a currency. And the Internet is going to use that currency to create sovereignty. After 2008, currency creates sovereignty; and the Internet has its own currency. Which means the Internet has purchasing power; which means that the Internet has economic freedom; which means that the Internet can exert economic freedom in a post-nationalistic way; in a way that ignores borders and makes the nation state — not obsolete — but simply less relevant.
Sometimes, a single idea can completely rebase your perception of what is possible. For me, Shamir’s Secret Sharing is one of those ideas.
Imagine there’s a special kind of safe. Inside the safe, you and 4 other people store a bunch of money that you have all agreed will only be used when a certain number of people (let’s say a simple majority, so 3 out of 5 of you) decide to make a purchase. You don’t know these people, so you can’t rule out that 1 or 2 of them might get it into their heads to take the money and run. But because this safe is so special, you don’t have to trust them in order to benefit from it.
How is the safe special? Instead of being openable with a single key that one person has to keep secure and promise only to use at the appropriate times, this safe has 5 keys and 5 slots, one for each of the 5 members. The safe is designed to open when at least 3 of those keys are inserted, but it remains locked if only 1 or 2 keys are used. This means that in order to access the funds inside, 3 of the 5 members have to insert their keys to open the safe. As long as you keep your own individual key securely hanging around your neck, you can always have confidence that if money is taken out, at least 3 people in the group agreed to take it out, and no minority was able to go against the majority’s wishes.
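The 3-of-5 safe maps directly onto how Shamir’s scheme actually works: the secret is the constant term of a random degree-2 polynomial over a prime field, each member’s “key” is one point on that polynomial, and any 3 points recover the constant term via Lagrange interpolation. Here is a minimal sketch in Ruby (toy parameters for illustration, not production-grade crypto):

```ruby
require "securerandom"

# Toy (3-of-5) Shamir's Secret Sharing over a prime field.
# Illustration only -- real implementations need constant-time arithmetic,
# authenticated shares, etc.
PRIME = 2**127 - 1 # a Mersenne prime comfortably larger than our secret

def make_shares(secret, threshold, count)
  # Random polynomial of degree (threshold - 1); the secret is the constant term.
  coeffs = [secret] + Array.new(threshold - 1) { SecureRandom.random_number(PRIME) }
  (1..count).map do |x|
    [x, coeffs.each_with_index.sum { |c, i| c * x**i } % PRIME]
  end
end

def recover(shares)
  # Lagrange interpolation evaluated at x = 0 recovers the constant term.
  shares.sum do |xj, yj|
    num = den = 1
    shares.each do |xm, _|
      next if xm == xj
      num = num * -xm % PRIME
      den = den * (xj - xm) % PRIME
    end
    yj * num % PRIME * den.pow(PRIME - 2, PRIME) % PRIME # Fermat inverse
  end % PRIME
end

shares = make_shares(42, 3, 5) # 5 keys; any 3 open the safe
puts recover(shares.sample(3)) # => 42
```

With only 2 shares, the interpolation is underdetermined, so a minority learns essentially nothing about the secret.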
Why is this important? In a word, Socialism. OK, maybe the word “Socialism” seems a bit extreme. But think about what sort of things can go inside the safe besides money — “the means of production, distribution, and exchange”, perhaps? I’ll leave those thoughts as an exercise for the reader. The important concept, though, is that there is something that is shared in common (whatever is in the safe), and a decision on when and how to use that item can only be reached when a certain percentage of the group agrees. All votes are equally important. No action can be taken without buy-in. And, most importantly, no central authority is needed to count votes and declare “the yeas have it”.
When your goal is to remove the necessity of an authority figure to mediate, coordinate, and arbitrate decision-making, in a world where you cannot necessarily trust your peers, you’ve got to carefully consider all the tools available to those ends. I believe Shamir’s Secret Sharing algorithm is one such tool and will play an important role in the future of decentralized, autonomous collectives.
P.S. If you are interested in further reading, this algorithm came to my attention through “Bootstrapping a Decentralized Autonomous Corporation”, which I found while reading the White Paper on Ethereum. You know, if that’s what you’re into.
I often think about whatever job I’m doing as a “struggle against entropy”. Entropy, as Muse explains it:
All natural and technological processes proceed in such a way that the availability of the remaining energy decreases.
In all energy exchanges, if no energy enters or leaves an isolated system the entropy of that system increases.
Energy continuously flows from being concentrated to becoming dispersed, spread out, wasted and useless.
New energy cannot be created and high-grade energy is being destroyed.
An economy based on endless growth is unsustainable.
We all know that entropy is going to win, eventually. That’s considered a given in physics. And empirically we can say that’s true in our own lives. Our laundry goes from clean to needing to be cleaned; our meals go from cooked to eaten; our bodies go from born to dead. Even when we make efforts every day to slow or reverse these processes, eventually, inevitably, entropy will win.
In some jobs, the amount accomplished vs. the effort put in forms a sort of logarithmic graph. For example, working as a bartender: at the end of the day, after all your customers have been served, your bar is clean, your glasses are clean, your trash is taken out, you have re-stocked, and you are ready for the next shift. If you don’t do everything well, you’ll be behind the next day, and things will be harder going; the lack of effort compounds to form a morass of difficulty. However, there is no getting ahead; even if you clear every ashtray and wipe every counter as soon as they are dirtied, even if you restock every cold beer as soon as it is served, you can still only reach a certain output. You still, at the end of the day, will end up with clean floors, clean glasses, and a fresh start for a new shift. Any optimizations on top of that (a better inventory or POS system, batching work when mixing drinks, etc.) gain only a small amount of additional output. Hence the logarithmic graph. Eventually you come to the limit of how much you can optimize that type of job.
Other jobs, however, form an exponential graph of amount accomplished vs. effort put in. Technology plays a key role in these situations. Any time you can replace work that a human normally needs to do with a (semi-)automated system, you are increasing your output relative to input. And if your job is to create said technology, then your effort produces output that in turn multiplies the value of future effort: exponential. It’s acceleration: applying the same amount of force over time results in a compounded velocity. It’s a form of bootstrapping: companies (specifically startups) build ideas and systems on top of existing ideas and systems until they can impact 10, then 100, then 1,000, then 100,000, then 1,000,000 users; not with 1,000,000 times the effort, as it would be if you were trying to run a bar with 1,000,000 customers, but with roughly the same (within a factor of 10 or two) amount of man-hours.
And that’s our ultimate weapon in the struggle against entropy: our time. Some things scale linearly with our effort: sleeping, exercise, hobbies, volunteering, time with family, watching TV. But for me, work — where I spend at least 1/3 of my day — is an area where I constantly look for ways to improve my multiplier. If there is a task that I frequently need to perform that seems like a good candidate for automation, I need to automate it. Why? Because if I don’t, I feel like I’m stuck in a hamster wheel, running the same circle over and over. I don’t want to solve the same problems every day, day in, day out. I want to solve new problems; interesting problems; problems that make tomorrow better than today. Because these types of problems mean that in the struggle against entropy, I’m not just entrenched, holding the line, content not to be losing ground; no, I’m winning battles, advancing forward, and marching towards a goal that grows ever-nearer. And in this struggle, technology is my rearguard, allowing a retreat to a safety zone of automated processes and analyzable numbers so that even when I lose a battle, all is not lost. We can pick up our wills and fight the struggle against entropy for yet another day.
An oldie but a goodie from 1990 on choosing names for servers: http://tools.ietf.org/html/rfc1178.
Most of the document is about what *not* to do and why, but here’s really the core of what is considered best-practice when choosing a naming scheme for multiple machines in a group:
Naming groups of machines in a common way is very popular, and enhances communality while displaying depth of knowledge as well as imagination. A simple example is to use colors, such as “red” and “blue”. Personality can be injected by choices such as “aqua” and “crimson”. Certain sets are finite, such as the seven dwarfs. When you order your first seven computers, keep in mind that you will probably get more next year. Colors will never run out. Some more suggestions are: mythical places (e.g., Midgard, Styx, Paradise), mythical people (e.g., Procne, Tereus, Zeus), killers (e.g., Cain, Burr, Boleyn), babies (e.g., colt, puppy, tadpole, elver), collectives (e.g., passel, plague, bevy, covey), elements (e.g., helium, argon, zinc), flowers (e.g., tulip, peony, lilac, arbutus). Get the idea?
We’ve all been burned providing our email address to websites. Maybe you did it out of the best intentions: wanting an e-book; or a set of tips sent by email; or simply just to get past a content gate. But you always know, deep down, that your inbox is going to hate you for subjecting it to yet another onslaught of poorly-targeted “email blasts” or other such spam.
Have you ever been linked to a blog post where, before you can even get a chance to read the first sentence, a modal pops up asking you to “SUBSCRIBE TO OUR BLOG SLASH NEWSLETTER! FOR THE WIN!”. You don’t even know who they are, or what they do, or what other type of content is on their site. There’s no way in hell you’re going to give out one of your most guarded public secrets (your email address) to these complete strangers.
So if you are so wary of providing your email address, why do you expect your users to?
Don’t ask users for their email address at the expense of them being able to actually use your site. It’s a turn-off, and they’re not going to convert. Instead, how about asking them in a CTA at the bottom of the article? And how about, instead of it being “Subscribe! Subscribe! Subscribe!”, perhaps it can be “Learn more about X Topic You Just Read About.” Imagine that. Email subscriptions targeted to the type of content they are interested in.
tl;dr Yes, ask for emails; but be classy about it. You might just end up with higher value leads.
If you are seeing this capybara error:
Unable to find css "[element_id]" (Capybara::ElementNotFound)
Make sure that you actually call visit before trying to find an element!
I was trying to use a regular capybara selector, and this should have been cut and dried. Since capybara’s find() uses CSS selectors by default, and since my page had that specific element on it, there should not have been any issue. But no matter what I tried — different selectors, xpaths, etc. — I kept getting the same frustrating error:
Unable to find css "[element_id]" (Capybara::ElementNotFound)
I thought I was going crazy, or that I had completely lost my ability to program, or both!
My problem was that I had forgotten to call visit before calling find. As soon as I added the visit call, my error went away:

visit '/'
find("element_id").should be_visible
This makes sense, though, right? If you don’t call visit first, there is no HTML content in which to find your element! It would be nice if capybara had a more descriptive error when you try to call find without having first called visit. That way this type of error would not frustrate so many people for so long!
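For context, here is the shape of a passing feature spec with the fix in place. The file path and the #welcome_banner id are made-up placeholders, and this assumes capybara/rspec with the old .should syntax used above:

```ruby
# spec/features/landing_page_spec.rb -- hypothetical example
require "spec_helper"

feature "Landing page" do
  scenario "shows the welcome banner" do
    # Without this visit, find() below has no document to search,
    # so Capybara raises Capybara::ElementNotFound.
    visit "/"

    find("#welcome_banner").should be_visible
  end
end
```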
I hope this helps!
Have you received this error message while trying to run bundler, or start your rails server?
did not find expected key while parsing a block mapping at line 1 column 1 (Psych::SyntaxError)
The problem is that your database.yml file is not being parsed properly because of inconsistent indentation.
Can you pick out what’s wrong with this code?
default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5

development: &development
<<: *default
  database: <%= ENV["HEROKU_POSTGRESQL_DATABASE"] %>
  username: <%= ENV["HEROKU_POSTGRESQL_USERNAME"] %>
  password: <%= ENV["HEROKU_POSTGRESQL_PASSWORD"] %>
  host: <%= ENV["HEROKU_POSTGRESQL_HOST"] %>
  port: <%= ENV["HEROKU_POSTGRESQL_PORT"] %>
This code causes the did not find expected key while parsing a block mapping error when trying to start the rails server.
The underlying problem is that the YML inheritance line <<: *default is indented at the same level as development: &development instead of at the same level as database:. The correct code would be:
development: &development
  <<: *default
  database: <%= ENV["HEROKU_POSTGRESQL_DATABASE"] %>
This should get rid of the did not find expected key while parsing a block mapping error you receive when you run your rails server, bundler, or anything else that parses your database.yml file.
Hopefully this solves your problem. Good luck!
All day long, our brains are chugging away, one thought after another. Do I press snooze or go for a run? Oatmeal or smoothie? Black shirt or blue shirt? Time to go to work. Work, work, work. OK, work’s done. Do I go for a run? What’s for dinner? Go to bed or open one more reddit link?
When we work, is it because of what we want to give to the job? Or because of what we want the job to give us? In the first situation, we are motivated by the work itself; we have clear goals and reasons for participation and, if they are no longer being met, we are free to switch our focus to something else on which we want to spend our time. In the latter situation, we are motivated by the effects of the job, such as money or career advancement; the job is not inherently special to us; it’s simply a means to an end.
Take a moment and reflect on why it is that you do what you do. Have some meta-thoughts: think about why it is that you are thinking what you’re thinking. The only way to change the future is to notice the present and decide to act.
One might argue that the more versions of ruby and rails your gem supports, the more valuable it is to the general population. Well, I guess it does also depend on what your gem does (rainbows and unicorns?).
Thoughtbot has done it again with appraisal, a neat and nifty gem for testing your gem in different ruby and rails environments. It’s especially useful when combined with Travis to specify which continuous integration environments should be used or ignored (see gringotts’ travis.yml for an example).
Note: appraisal’s README on github is currently for the 1.0.0.beta2 version. This can be confusing, since the current rubygems version (and what you get if you just do gem "appraisal") is 0.5.2. You can either use the beta version with gem "appraisal", "1.0.0.beta2", or you can use the 0.5.2 version of the README.
As mentioned in the README, you can run your spec and cucumber tests against these ruby/rails versions using appraisal. What’s not mentioned, though, is that you can also run your local webserver against your different appraisals as well.
Since appraisal essentially just pre-compiles the bundles that you are going to use for your different appraisals (e.g., rails-3.2, rails-4.0), you can use that to your advantage:
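For reference, those appraisals are declared in an Appraisals file at the project root. A minimal sketch, with the gem versions as examples only:

```ruby
# Appraisals -- a small Ruby DSL read by the appraisal gem
appraise "rails-3.2" do
  gem "rails", "~> 3.2.0"
end

appraise "rails-4.0" do
  gem "rails", "~> 4.0.0"
end
```

Each appraise block becomes a generated gemfile under gemfiles/ (e.g., gemfiles/rails_3.2.gemfile), which is exactly the kind of path we hand to the webserver.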
First, find out the path to the bundle that you want to test locally:
Look for the line that looks like:

bundle check --gemfile='/path/to/rails/engine/gemfiles/rails_4.0.gemfile'
We are going to hand that path to our webserver so it knows which bundle to use. However, instead of using the --gemfile option, we are going to pass it as an inline environment variable:
BUNDLE_GEMFILE='/path/to/rails/app/gemfiles/rails_3.2.gemfile' bundle exec rails server
Or, if you are developing an engine, you can run your dummy app’s rails server using
BUNDLE_GEMFILE='/path/to/rails/engine/gemfiles/rails_3.2.gemfile' bundle exec spec/dummy/bin/rails server
Note: spec/dummy is the location of your engine’s dummy app. This might instead be in test/dummy, depending on how you set your engine up.
Running a local webserver against different ruby/rails appraisals is useful for being able to manually play around with why a particular test might not be working in a particular bundle.
One last thing, maybe this was just something quirky on my end, but I ended up having issues when running my rake tests against my rails-3.* appraisals. I kept encountering this error:
undefined method `migration_error=' for ActiveRecord::Base:Class
Unable to find a solution, I am embarrassed to say that I instead took the lazy (and very time-consuming) way out: repeatedly pushing to github after every commit so that Travis would run my rails-3.* tests for me while I debugged where my issue might be occurring. That wasted not only a lot of my time (5 minutes between fix and result), but also a lot of Travis’ server hours… Sorry guys! Let me know how I can show you some love.
Anyway, the clue to the solution of the undefined method issue was hidden in a comment on one of the unaccepted answers to a StackOverflow question. As Iliya Stepanov points out,
It was one string in config/initializers or environment. I don’t remember exactly what string it was, but check carefully that files if you rolling back from Rails 4 to 3 and facing similar problems.
It turns out that in spec/dummy/config/environments/development.rb there is this line:
config.active_record.migration_error = :page_load
I commented that out, and everything worked hunky-dory. It doesn’t make me happy to think that I am suppressing potentially useful errors, but there were no pending migrations. Also, I’m pretty sure that might only have been related to the fact that I created a rails4 engine, then was trying to backport its dummy to rails3. Pretty sure.
Rails 4 introduces a new way of signing cookies that differs from the previous method in Rails 3. When you upgrade to Rails 4, you are likely to receive a warning:
DEPRECATION WARNING: You didn't set config.secret_key_base.
As pointed out in the guide for upgrading rails, you can simply run rake secret to generate a new secret, and paste that into config/initializers/secret_token.rb.
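As an aside, rake secret in Rails 4 is essentially a one-liner around Ruby’s standard SecureRandom library, so you can also generate an equivalent key without rake:

```ruby
require "securerandom"

# 64 random bytes, hex-encoded: a 128-character string, the same
# shape of value that `rake secret` prints in Rails 4.
secret = SecureRandom.hex(64)
puts secret.length # => 128
```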
However, do we really want this crucial security key to be hard-coded in our application and pushed to our repository? What if our repository is public, like in the case of an open-source app like the gringotts demo?
Well, my friends, we can simply use an ENV variable to store the secret. In config/initializers/secret_token.rb, put this line:
YourRailsApp::Application.config.secret_key_base = ENV["SECRET_KEY_BASE"]
Now we can check in config/initializers/secret_token.rb without worrying about anyone ever seeing the secret key that our application uses to encrypt cookies. But we need to make sure that ENV["SECRET_KEY_BASE"] will actually be set, both locally and on Heroku.
Locally, we can use the nifty figaro gem to set ENV variables quickly and easily. Following the instructions on the figaro github page, we add gem "figaro" to our Gemfile, run rake figaro:install, then edit our config/application.yml to add the line:
SECRET_KEY_BASE: (really long string output of rake secret)
For Heroku, we could use figaro’s helper rake task for updating Heroku’s config vars (rake figaro:heroku). However, what if we accidentally check in our config/application.yml file? We’d be sharing the secret key used to encrypt our cookies on our production server. Noooo good!
Preferably, we can generate a separate secret key for Heroku and set it directly:
heroku config:set SECRET_KEY_BASE=$(rake secret)
Just realize that when you change your secret key base, all previous versions of your cookies will no longer be valid.
And that’s that. We are now Rails 4 compliant (no more DEPRECATION warning), and we have the added bonus of keeping our secret a secret.