For the past few months, I’ve been working on swagger-tools, a JavaScript library that exposes various
utilities for working with Swagger documents. Need a command line utility to validate your Swagger
document(s)? swagger-tools has it. Need an API for validating your Swagger document(s)? swagger-tools has it.
Want to use your Swagger document(s) to validate your API requests or to wire up your request handlers? swagger-tools
has it…and much more. But this post isn’t about swagger-tools, sorry for the shameless plug.
There may come a time when it makes sense to expose your Node.js module to the browser. This happened recently
for swagger-tools and I learned a lot during the process. My hope is that this post will shed some light on the
available options and on what I believe is a pretty decent approach.
Getting Started
When I started this process, the first thing I did was look for existing projects that work in Node.js and the browser.
While the concept is simple, I wanted to see how the professionals do it. Since I already use
Lo-Dash, I started there. As expected, I found an immediately-invoked function expression and a few
statements that check the environment to identify whether the code is running in the browser or Node.js. Pretty simple
stuff. For due diligence, I looked at a few other projects and they all had the same recipe:
(function(root) {
  // The code for YOUR_MODULE_NAME

  if (typeof exports !== 'undefined') {
    if (typeof module !== 'undefined' && module.exports) {
      exports = module.exports = YOUR_MODULE_NAME;
    }
    exports.YOUR_MODULE_NAME = YOUR_MODULE_NAME;
  } else {
    root.YOUR_MODULE_NAME = YOUR_MODULE_NAME;
  }
})(this);
This example is quite simplistic but it shows how you might do this. (If you want a
good example of how to do this and also how to handle more of the JavaScript module environments, check out how Lo-Dash
does it. They were even nice enough to support Narwhal and Rhino!) Unfortunately for me, what
allowed these projects to use such a simple approach for writing code that works in both the browser and Node.js was a
luxury I did not have: these projects have no external dependencies, not even on the core Node.js modules.
The Problem
Any time you have a Node.js module with dependencies, you can’t just port your module to run in the browser. You not
only have to worry about making your own code work in the browser but you also have to make sure your dependencies, and
their dependencies and so on, run in the browser. You will also need to do this for core Node.js modules, which can be
a daunting task.
Even assuming you can do all of this yourself, you still need to make sure your code can load modules both in the
browser and in Node.js. The problem is that the browser doesn’t have a built-in require function for loading modules.
What do you do? Do you separate your source base into a Node.js module that wires things up the Node.js way and a
browser module that wires things up the browser way, and then figure out some way to share code between them? It’s not
as easy as it sounds.
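For example, even a single line like the following assumes a module system the browser does not provide (a trivial
sketch just to make the point):

// Works out of the box in Node.js; in the browser there is no built-in require()
var path = require('path');

console.log(path.join('some', 'dir'));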
Enter Browserify
Thankfully, there is an open source project called Browserify that handles all of this for you. Chances
are good that if you started the same way I did, you ran across numerous posts mentioning Browserify already.
What Browserify does is:
Browserify lets you require('modules') in the browser by bundling up all of your dependencies.
That’s a pretty simple explanation of what Browserify does. It also helps solve most of the issues related to your
code, or code you depend on, using core Node.js modules. (For complete information on the Node.js core module
browser compatibility, view the Browserify compatibility documentation.)
With Browserify, you can generate a browser bundle from your Node.js files/modules. This means one source base for
Node.js and, from that same source base, a working browser bundle. I have used Browserify successfully
on my carvoyant library and swagger-tools. During the development of each of these
projects, I have yet to need to break my source base up for the applicable parts that could/should run in the browser.
(In swagger-tools, of course, I do not include the Connect middleware in my browser bundle but that separation was due
to applicability and not because of some lack in Browserify.)
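For illustration, here is a minimal sketch of generating a standalone bundle with Browserify’s API; the file names and
the SwaggerTools export name are assumptions for the example, not swagger-tools’ actual build:

// build.js - generate a standalone browser bundle (names are assumptions)
var browserify = require('browserify');
var fs = require('fs');

browserify('./index.js', { standalone: 'SwaggerTools' })
  .bundle()
  .pipe(fs.createWriteStream('./browser/bundle.js'));

The standalone option wraps the bundle in a UMD-style wrapper so the result works as a browser global, an AMD module
or a CommonJS module.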
To avoid listing out the same examples and documentation Browserify uses to explain/justify its use, I suggest you
visit its documentation. Instead, I would like to share a few things I’ve had to do while
using Browserify and a new trick I learned to build Bower modules using Browserify, complete with dependency
management handled by Bower instead of having to bundle all dependencies with your browser bundle.
Browserify Tips
Making your Node.js module more browser friendly is an iterative process. If you want to use Browserify to take your
Node.js module as-is and create a standalone browser module, it will do that. But chances are good that your first
build will be huge, especially if you use any of the core Node.js modules.
Trimming the Fat
What I’ve noticed when using Browserify is that I tend to start with a large browser bundle and then try to
figure out how, if it’s even possible, to make the bundle smaller. The reason your browser bundle is typically so large
is because Browserify will build a browser bundle that includes all of your dependencies in it. (Think of this as
analogous to the uber JAR in the Java space or a static binary for C/C++/…)
To reduce the size, some modules like Lo-Dash will allow you to cherry-pick the actual module features you use
instead of requiring the whole Lo-Dash module. Unfortunately, not all modules are as flexible as Lo-Dash is. Of course,
you can analyze your modules to see if you’re importing large or unnecessary modules and refactor accordingly.
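For example, here is what cherry-picking might look like; lodash.foreach is one of the standalone Lo-Dash builds
published to npm:

// Instead of bundling all of Lo-Dash:
// var _ = require('lodash');
// require only the pieces you actually use:
var forEach = require('lodash.foreach');

forEach([1, 2, 3], function (n) { console.log(n); });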
In the end, the biggest gain I’ve seen is by instructing Browserify to not include certain dependencies where possible.
For example, let’s say you depend on a module that already has a browser version available. Bundling that module is no
longer a requirement because you can include it using Bower, a CDN or by shipping that browser version with your code.
The way to do this isn’t as obvious as you might think. If you look at the Browserify documentation, you might be
inclined to exclude or ignore certain files/modules. This will definitely make your browser bundle smaller but the
bundle will not work when run in the browser. The reason for this is that excluded/ignored modules are replaced with
undefined/{} respectively.
The proper way to tell Browserify to exclude a module, while still being able to resolve that externally provided
dependency in the browser, is to use a Browserify transform. For this purpose, there are many options out there but the
best one I’ve found is exposify. With exposify, you can configure how Browserify will resolve modules that
are provided for you externally. For example, if you were to load Lo-Dash using a script tag, you could tell exposify
that the lodash module is provided by the _ global variable. To see this in action, have a look at
swagger-tools/gulpfile.js#L57. (Long story short, exposify lets you avoid including
modules in your browser bundle and lets you resolve the runtime dependency by associating a module name with a global
variable by name.)
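Here is a rough sketch of what that configuration might look like; I’m going from memory of the exposify README here,
so treat the exposify.config mechanism as an assumption and double-check it against the exposify documentation:

// Map the 'lodash' module to the global '_' provided by a script tag
var browserify = require('browserify');
var exposify = require('exposify');
var fs = require('fs');

// Assumption: configuration style per the exposify README
exposify.config = { lodash: '_' };

browserify('./index.js')
  .transform(exposify)
  .bundle()
  .pipe(fs.createWriteStream('./browser/bundle.js'));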
Thanks to Browserify and exposify, I could create a Bower module for swagger-tools and not include all of the required
dependencies with the generated browser bundle. I was able to set the proper Bower dependencies for modules that had
a Bower module published and do the default Browserify thing by bundling the modules that did not. This saved me
708k or 67% of my file size for my development module and 48k or 37% for my minified production module.
Of course, Browserify has transforms for the usual suspects: source maps, minification, uglification,
etc.
External Files
One of the things I needed for swagger-tools was to include some JSON files with my module. In Node.js land, this works
by default. In Browserify land, you need another Browserify transform called brfs. What brfs does is find any place in
your code where you read a file via fs.readFile or fs.readFileSync and inline that file’s contents into your bundle.
So if you were to write var schema = fs.readFileSync('./path/to/schema.json', 'utf8'), brfs would make it so that, in
the browser bundle, schema is set to the string content of your schema.json file. (Requiring a JSON file directly, as
in var schema = require('./path/to/schema.json'), is something Browserify already handles on its own.)
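Wiring brfs in is just another transform; here is a minimal sketch with assumed file names:

var browserify = require('browserify');
var fs = require('fs');

browserify('./index.js')
  .transform('brfs') // inlines fs.readFileSync(...) calls at bundle time
  .bundle()
  .pipe(fs.createWriteStream('./browser/bundle.js'));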
Building Bundles
As I mentioned above, Browserify builds standalone browser bundles. I think in many cases these large static bundles
can serve a purpose as they give you a completely self-contained artifact for your module. (This is great for standalone
applications where you don’t care if others use your code.) On the other hand, being able to leverage package
managers to share your module and to create smaller bundles is nice as well. I couldn’t make up my mind, so for
swagger-tools I build both. (To see how I’m building 4 bundles, two standalone and two for Bower, for
swagger-tools, check out swagger-tools/gulpfile.js#L35.)
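If you’re curious what one of those gulp tasks might look like, here is a rough sketch of a standalone-bundle task;
the file names and task name are assumptions, and vinyl-source-stream is used to adapt Browserify’s output stream
for gulp:

var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');

gulp.task('browserify-standalone', function () {
  // Bundle everything, exposing the module as a global for script-tag users
  return browserify('./index.js', { standalone: 'SwaggerTools' })
    .bundle()
    .pipe(source('swagger-tools-standalone.js'))
    .pipe(gulp.dest('browser/'));
});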
Conclusion
Browserify is a wonderful tool that makes it very simple to have one code base for your Node.js module and your browser
module. I found Browserify to be very unobtrusive and, given the right transform, I could do basically all of
the things I needed to build the browser bundle I required. I realize this was not a walkthrough or tutorial
but don’t fret, the Browserify documentation is very easy to read and understand.
When it comes to people and their opinions, Dirty Harry said it best:
Well, opinions are like assholes. Everybody has one.
Dirty Harry (The Dead Pool)
That’s right, everyone has an opinion. No matter the topic, there will always be those that agree, those that disagree
and those somewhere in between that can both agree and disagree based on circumstances. The problem is when someone’s
opinion turns them into an asshole. These people appear to be unable or unwilling to disagree without feeling the
need to marginalize those they disagree with. To illustrate, let’s look at a recent situation where this was going on.
Over Easter, Twitter was rampant with anti-Christian jokes. They ranged from calling Jesus a zombie to insinuating
that Christians put together the bunny and candy related holiday to entice young kids…and everything in between.
For those that do believe in the story behind Easter, it’s kind of a big deal. Easter to Christians is the day that
Jesus beat death and came back from the grave. I’ll stop there because I’m sure some of you are uncomfortable by now
and the last thing I need is you accusing me of proselytizing and missing the point. Back to the topic.
What we have here are two differing opinions: some believe in the story of Easter and the characters involved while
others do not. No one can take away your right to have a different opinion, but marginalizing people with whom you do
not agree is not right and that’s exactly what was described above. Don’t believe me? Let’s look at the meaning
of the word marginalize:
mar·gin·al·ize
verb - treat (a person, group, or concept) as insignificant or peripheral.
“attempting to marginalize those who disagree”
After reading the definition, it should be clear that when you begin to treat people as if their thoughts/opinions are
insignificant you are in fact marginalizing them and that is what these jokes are doing. These jokes are basically
saying: I do not value your beliefs, and by extension you as a person. In fact, I want the whole world to know so I
will make a mockery of you in public. You may think these jokes are harmless but someone out there could be hurt by
what you are saying. Just because you disagree doesn’t mean you should go out of your way to hurt people but that is
what is going on. We should accept that there is a disagreement and treat all involved with the same level of respect
we would expect from them. Hurting people is wrong.
Let’s look at this situation from a different perspective because this isn’t just about marginalizing Christians.
What if these jokes were gay jokes? What if they were jokes about race or gender? I would ask what if these jokes
were sexually insensitive but we already know the answer to that one thanks to Donglegate. When asking
these questions it doesn’t take long for the answer to come: The Social Justice Warriors
would be out in force and it would be a mob-style witch hunt. Jobs would be at stake, reputations would be at stake,
etc.
That’s the way it should be, although I suggest that we use a little more tact when handling these types of things.
We as a collective should stand up against all types of marginalization. No one should feel the sting of being
marginalized.
So why is it we disallow marginalization based on sexuality, race, gender, etc. but it’s completely fine to marginalize
those with religious beliefs? Why is it we allow Mozilla employees to stand against marginalizing homosexuals yet the
same employees are involved in marginalizing Christians with their hurtful Easter jokes? How is it fair that we can
pick and choose who can be marginalized and who cannot…and who gets to make that decision? We cannot behave this
way.
In the end, it all comes down to how we see and treat people. The moment you let your opinion turn you into
an asshole, you begin to marginalize those with whom you interact. Do it enough and you become insensitive to
the point where you can hurt people without even knowing it.
I was raised to treat others the way you would expect them to treat you. That being said, I don’t think any of you
would want someone marginalizing you or being a prick to you, so why do it to others? Next time you want to tell some
“joke”, put yourself into the shoes of someone who might hear it and be offended before you open your mouth.
It’s this kind of thinking, this type of value system, that just might keep you from becoming an asshole.
Marginalizing people is easy…but do you care enough to not contribute to it?
Whenever we feel cheated as customers, our first reaction is to tell the world about it. Leaving a bad online
review, lashing out on social media, posting the experience online for the world to see…it’s become the norm. Heck,
there is a current news story titled: Scathing Yelp Review Could Cost Woman $750K. What about
when things go right? Where is the “Amazing Yelp Review Could Earn Woman $750K”? While that will likely never happen,
I do think we should be more even-handed in dishing out our online opinions. Don’t only shout to the masses when things
go wrong; do the same when things go right. And that’s my plan for this post: I want to tell you about a recent interaction
with Brooks Running.
I started running back in 2010 but I didn’t really hit my stride until last year. At the end of 2012, I bought a pair
of Brooks Pure Connect running shoes. I bought them because they looked great and they were lightweight. Over the
next year, I wore the soles off of these shoes. They lasted over 300 miles of actual running distance, a
Tough Mudder, two Mud Brigade races, a Crazy Legs 10K and countless
runs. These shoes were amazing, the best pair of shoes I’ve ever owned.
When it came time to replace them, the Brooks Pure Connect 2 had just come out and I expected the same from them
but I couldn’t find a pair that fit right. So I went to the internet and found that 6pm.com had a few
different pairs of the original Brooks Pure Connect shoes, new in the box. Sure, they were a model year old but
I had proven these shoes were what I wanted. I bought three pairs. I wanted to make sure I had the shoes I wanted for
as long as I could. So I started running in them, spreading the load between the three pairs to ensure they’d last me
as long as possible.
One day I finished a 4 mile run and I noticed one shoe was a little looser than the other. After looking into it, I
noticed the elastic tongue strap on one shoe was hanging by a thread. This shoe only had 20 miles on it. The other
two pairs had the same mileage on them and looked out-of-the-box new. This was weird, so I reached out to Brooks
Customer Support.
When you reach out to Brooks Customer Support, they ask you to describe the shoe, the age, the mileage and to include
a picture of the defect. I filled out their form, submitted it and waited. My initial response from them could be
summarized like this: Your shoe is the original Brooks Pure Connect, we just released the Brooks Pure Connect 3 so
the best we can do is offer you 25% off your next purchase. This is a reasonable response but the shoe was only
4 months old, only had 20 miles on it, and I really felt it was defective because of the nature of the
tear.
I responded but when I did, I made sure to be as tactful as possible. I told them I was disappointed because the
shoe was defective (not something that can manifest by sitting in a box on a shelf, I added) and that I was stuck with
a shoe I couldn’t wear, a shoe that other than this issue looked out-of-the-box new. The next response I got back was
a breath of fresh air. They basically said that my respectful response was appreciated and that they had decided
that they would honor the warranty from purchase, something they surely didn’t have to do, and send me a new pair of
the Brooks Pure Connect 3 shoes.
I was blown away. I didn’t expect this after they had already said there wasn’t anything they could do beyond their
standing offer in the previous email. I figured my email would be skimmed and put into some virtual folder. In the
end, I was wrong and Brooks went above and beyond. The way they treated me has created a Brooks customer for life.
Note: I cannot guarantee this will be your outcome in a similar set of circumstances, nor should you consider this
to be the norm. This story is to showcase an awesome example of customer service.
Ever since SteamOS was released, I’ve been trying to get it installed into VirtualBox. When I
first started this process, I found the same SteamOS Basic Guide Installation on VirtualBox
everyone else was using, or copying, and began trying to make this happen. Since VirtualBox abstracts all the
hardware, I figured the installation process would be identical regardless of my operating system. For the most part,
that is right. Other than the creation of the ISO file, the installation process is identical regardless of your
operating system. Unfortunately, this is where things go wrong for Mac users as there was no solid documentation on
how to create the ISO on Mac OS X.
Mac OS X includes utilities out of the box for things like this. My assumption, like that of many others based on the
suggestions I found, was that I could use these tools to create an ISO from the provided zip file. Even better, the
suggestions I was finding aligned with my assumption. Based on my research, all of the suggestions on
the matter included one of the following approaches:
Disk Utility UI
- Open Disk Utility
- New > Disk Image From Folder…
- Select the folder you extracted the SteamOSInstaller.zip to
- Choose the hybrid-image Image Format
Disk Utility CLI
- Execute hdiutil makehybrid -o PATH_TO_ISO PATH_TO_FOLDER -iso -joliet from a terminal
Both of these worked at face value: an ISO was created from the zip file. Unfortunately, whenever you tried to
follow the installation guide and use the ISO, you were very quickly faced with the following error message:
error: "prefix" is not set. There was no recovery from this. You could wait forever and VirtualBox would show
you the same screen. Heck, I even tried booting into the EFI shell and loading the ISO manually to see if I could
somehow work around the situation.
Originally I had given up; I didn’t want to waste any more time on it and no one seemed to have an answer. And then
today, for some odd reason, I tried it again, some weeks after my original attempts. I had hoped there was a
bug in the early release and that it had been fixed by now. I rebuilt my ISO and tried again but to no avail, I got to
the same place with the same results. Frustrated, I googled and came across the same posts. It seems nothing has
changed.
I asked myself: What is the difference between my attempt and the documented working attempts? That’s when it
occurred to me that maybe the ISO being created using the options above didn’t create a proper ISO. Seems logical
since that is the only deviation I made from the documentation. So I went on the hunt for an installation guide
that used an ISO-creation tool I could install on my Mac, and that’s when I found that xorriso
was available via Homebrew. After installing it, I was able to use the following command to create a
working SteamOS Installer ISO that works flawlessly via VirtualBox:
xorriso -as mkisofs -r -checksum_algorithm_iso md5,sha1 -V 'Steam OS' \
-o ../SteamOSInstaller.iso -J -joliet-long -cache-inodes -no-emul-boot \
-boot-load-size 4 -boot-info-table -eltorito-alt-boot --efi-boot boot/grub/efi.img \
-append_partition 2 0x01 boot/grub/efi.img \
-partition_offset 16 .
Note: The assumption here is that if you were to extract the zip file to a folder, you’d run this command from
within that folder; otherwise, you’ll need to alter the paths accordingly. Also, feel free to change the -o option to
change the name and location of the created ISO file.
That pretty much wraps things up. I’m excited to play around with SteamOS and while it was a pain to get started, due
to the dreaded error: "prefix" is not set, I’ve finally been able to get past it using the information above. I
hope this information helps other Mac users avoid the pain I originally went through.
Goodbye 2013
2013 was an interesting year for me. In many ways, I was blessed by it but at the same time, I found myself in a rut
most of the year. If it wasn’t Apigee related or family related, I wasn’t able to be as
productive or as creative as I would have liked. This has been great for my career at Apigee but 2013 left me
feeling a bit frustrated. The good news is that I finished the year reflecting and learning
things that should make 2014 better. For example, I realized I spent most of 2013 struggling with time management.
When you have an insatiable desire to create things, a lack of time management and an inability to prioritize can
make life very frustrating. I feel like I have so many cool ideas but I’ve struggled bringing them to fruition.
In 2013, I spent a lot of time validating my ideas and researching technologies to help me bring my ideas to light. Too
bad that was all I did. Other than contributions to existing OSS software, I only completed one personal project of
my own in 2013, a JavaScript Library for Carvoyant. Just about all the time
I had, or made available, for personal projects in 2013 was wasted thinking about ideas instead of creating them.
I leave 2013 humbled, on many levels, but very grateful for having been able to learn from my mistakes.
Hello 2014
In 2014, I hope to blog more often. I feel like if I want to share more publicly, I’ll have to manage my time better
to allow for it, and to do it right. I also feel that there are some really cool things out there I’d like to try out
this year instead of just reading about them. My experiments could easily become good reads if I make the time to do
them right.
I’d also like to include some personal entries. Typically I’ve kept my writing to programming-related stuff,
like cool Emacs snippets or examples of how to use OSS software/libraries. But I find myself in a situation where I
feel like I have more to give than this. Sure, I hope to continue offering the same things I always have, hopefully
better done than before, but I also think I have a perspective and some experiences that might be important to others.
We’ll just have to see how that goes…I might be writing about this in 2015 as a bad idea.
I enter 2014 hopeful, with knowledge of my shortcomings to help me get beyond them, so that 2014 can be better than 2013.
The past month I’ve been using an open source project called
Dropwizard. Dropwizard describes itself as a “Java framework for
developing ops-friendly, high-performance, RESTful web services”. Dropwizard is an awesome piece of kit that bundles
best-of-breed Java tooling like Jetty,
Guava and Jersey. Speaking of Jersey, this is
what I’d like to talk about today, specifically about how Dropwizard exposes the ability to create your own Jersey
ExceptionMapper and how the built-in Dropwizard ExceptionMappers might cause you some grief, with a workaround.
What is an ExceptionMapper?
Jersey, or should I say JAX-RS, exposes a mechanism that allows you to map a thrown
Exception or Throwable to a REST response, instead of it going unhandled and being presented to
the user as a stacktrace or error text. (This mechanism requires that you implement the generic
ExceptionMapper interface and then register it.) This is excellent for REST APIs that like to
return errors back to the client as part of using the API, like returning a JSON representation
of an Exception that can be parsed and handled on the client.
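To make this concrete, here is a rough sketch of what an ExceptionMapper like the GenericExceptionMapper registered
below might look like; my actual implementation differs and the hand-built JSON body here is intentionally simplistic:
package org.thoughtspark.dropwizard.app;

import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

/**
 * Maps any uncaught {@link Throwable} to a JSON error response.
 */
@Provider
public class GenericExceptionMapper implements ExceptionMapper<Throwable> {
    @Override
    public Response toResponse(Throwable throwable) {
        // A real implementation should JSON-escape the message
        return Response.serverError()
                       .type(MediaType.APPLICATION_JSON)
                       .entity("{\"error\": \"" + throwable.getMessage() + "\"}")
                       .build();
    }
}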
Custom ExceptionMappers in Dropwizard
My initial impression of Dropwizard in the context of Jersey and needing to register custom ExceptionMappers was very
positive since Dropwizard exposes an API for registering ExceptionMappers. Here is a very brief example for those of
you looking to register your custom ExceptionMapper within Dropwizard:
package org.thoughtspark.dropwizard.app;

import org.thoughtspark.dropwizard.app.ApplicationConfiguration;
import org.thoughtspark.dropwizard.app.GenericExceptionMapper;

import com.yammer.dropwizard.Service;
import com.yammer.dropwizard.config.Bootstrap;
import com.yammer.dropwizard.config.Environment;

/**
 * Example Dropwizard {@link Service}.
 */
public class ApplicationService extends Service<ApplicationConfiguration> {
    /**
     * Entry point for running this service in isolation via Dropwizard.
     *
     * @param args the arguments
     */
    public static void main(String[] args) throws Exception {
        new ApplicationService().run(args);
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public void initialize(Bootstrap<ApplicationConfiguration> bootstrap) {
        bootstrap.setName("application");
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public void run(ApplicationConfiguration applicationConfiguration, Environment environment) throws Exception {
        // Register the custom ExceptionMapper(s)
        environment.addProvider(new GenericExceptionMapper());
    }
}
The GenericExceptionMapper being registered will handle all Throwables thrown and return a JSON payload representing
the error and its message.
Dropwizard’s Secret “Gotcha”
Everything was going great until I started using Dropwizard
Validation. I noticed that whenever my bean validation
failed, instead of seeing a JSON payload of my validation exception, I was always seeing an HTML version of the
exception…almost as if I never registered my custom ExceptionMapper, or maybe my custom ExceptionMapper just wasn’t
working. Since all Exceptions extend Throwable, I couldn’t see how my ExceptionMapper could be misconfigured,
so I dropped into the debugger.
After some looking around, I saw that the actual exception being thrown was of type InvalidEntityException. At this
point, I created a new ExceptionMapper specifically for the InvalidEntityException, restarted Dropwizard and it
worked! Instead of the HTML responses for InvalidEntityExceptions, I saw my JSON representation. Everything was
working great…that is until I restarted the server for a different reason and I noticed that the
InvalidEntityExceptions had gone back to HTML. I knew I hadn’t changed anything related to the ExceptionMapper so I
started debugging. After being unable to get the debugger to hit any breakpoints in my ExceptionMappers, I started
looking into the Dropwizard sources (thank goodness for open source software) and that is when I saw it:
Dropwizard registers its own ExceptionMapper for the InvalidEntityException. What was still bugging me was why my
ExceptionMapper worked once and then, after a server restart, stopped working without any changes to my code. Once
again I found myself in the bowels of Dropwizard’s source and that’s when I found my problem.
Dropwizard is adding its custom ExceptionMappers into Jersey’s singletons Set, a Set that does not guarantee order.
This explains why sometimes my ExceptionMapper would win and other times the built-in Dropwizard ExceptionMapper
would. Now that we know the problem, below is one way to work around it:
package org.thoughtspark.dropwizard.app;

import org.thoughtspark.dropwizard.app.ApplicationConfiguration;
import org.thoughtspark.dropwizard.app.GenericExceptionMapper;

import com.fasterxml.jackson.jaxrs.json.JsonParseExceptionMapper;
import com.sun.jersey.api.core.ResourceConfig;
import com.yammer.dropwizard.Service;
import com.yammer.dropwizard.config.Bootstrap;
import com.yammer.dropwizard.config.Environment;
import com.yammer.dropwizard.jersey.InvalidEntityExceptionMapper;

import javax.ws.rs.ext.ExceptionMapper;

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/**
 * Example Dropwizard {@link Service}.
 */
public class ApplicationService extends Service<ApplicationConfiguration> {
    /**
     * Entry point for running this service in isolation via Dropwizard.
     *
     * @param args the arguments
     */
    public static void main(String[] args) throws Exception {
        new ApplicationService().run(args);
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public void initialize(Bootstrap<ApplicationConfiguration> bootstrap) {
        bootstrap.setName("application");
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public void run(ApplicationConfiguration applicationConfiguration, Environment environment) throws Exception {
        // Remove all of Dropwizard's custom ExceptionMappers
        ResourceConfig jrConfig = environment.getJerseyResourceConfig();
        Set<Object> dwSingletons = jrConfig.getSingletons();
        List<Object> singletonsToRemove = new ArrayList<Object>();

        for (Object s : dwSingletons) {
            if (s instanceof ExceptionMapper
                    && s.getClass().getName().startsWith("com.yammer.dropwizard.jersey.")) {
                singletonsToRemove.add(s);
            }
        }

        for (Object s : singletonsToRemove) {
            jrConfig.getSingletons().remove(s);
        }

        // Register the custom ExceptionMapper(s)
        environment.addProvider(new GenericExceptionMapper());
    }
}
In the code above, I remove all Dropwizard ExceptionMappers so that I have complete control over how my application
renders Jersey Exceptions. Now no matter how many times I restart the server, my custom ExceptionMapper will be used
and I can always expect JSON to be returned for Exceptions thrown on the server. Of course, you might need to change
the approach above based on your needs but for this scenario, I just wanted any ExceptionMapper that Dropwizard
provided to be done away with so I could use my custom versions that returned JSON instead of HTML.
Conclusion
Dropwizard is awesome and anytime I have to write Java-based REST servers, I’ll be using it. I do question the
built-in ExceptionMappers, especially with their inability to be configured to output something other than the
hardcoded HTML, but with the workaround above, I don’t have to be stuck because of them. Please do not let this take
away from Dropwizard and if you get tired of having to use the workaround above, I’m sure the team would welcome a
patch…if you beat me to it.
I was going through my Google Reader stream today when I came across a thread that
bothered me: Borderlands 2’s Writer Says He’ll Change Tiny Tina If She Conveys Racism, As Some Players Think.
After reading the thread, I was shocked at the accusation myself. I remember playing Borderlands 2
with a good friend of mine a few days after the release and not only enjoying the Tiny Tina character but laughing so
hard I was crying. I love Tiny Tina and I just cannot see how she or the person behind her
(Anthony Burch), could be considered racist. I would have left it at a simple
reply to Anthony on Twitter supporting him and Gearbox Software but, feeling that
there is so much more to this than 140 characters can convey, I figured I’d weigh in here.
What is Racism?
Racism, as defined by Merriam-Webster, is as follows:
Any action, practice, or belief that reflects the racial worldview—the ideology that humans are divided into separate
and exclusive biological entities called races, that there is a causal link between inherited physical traits and
traits of personality, intellect, morality, and other cultural behavioral features, and that some races are innately
superior to others.
That being said, where in the description above does it say that someone of one race using the lingo of another
race is racism, or that a race could even impose ownership of lingo? The idea of saying that Tiny Tina’s character
is racist because she uses “black lingo” is a joke, although not a laughing matter, because it’s ridiculous
accusations like this that fuel the controversy around racism. Could you imagine calling
Eminem, one of the most talented rappers ever to grace a microphone, a racist because he’s
white and uses “black lingo” in his raps?
My Opinion
In my opinion, the idea of suggesting that lingo could be owned by a race could itself be considered racist.
I mean, suggesting that people of a certain race are the only ones to use certain words is a race-based stereotype, much
like what is described above. Having grown up in Georgia myself, I knew quite a few white girls/guys that used
black lingo and black girls/guys that used white lingo. To us, they were just words. As long as you stayed away
from the derogatory words/phrases commonly referred to as racial slurs, you used whatever verbiage best fit who you
were with and the context of the conversation regardless of which race used the verbiage most or coined a particular
word/phrase.
What did Gearbox say about the matter? Here is Gearbox President Randy Pitchford’s
response on Twitter:
@reverendanthony tina is not racist because you are not racist. You’re a pillar
of tolerance and inclusion.
In Conclusion
Borderlands 2 is an awesome game and Tiny Tina is one of my favorite game characters of all time. Like many others, I
find her hilarious and did not even think of the race card while enjoying the parts of the game she was in. I think
Anthony did an excellent job making Tiny Tina quirky, unique and memorable, all the things you’d want from a gaming
character. I also applaud his handling of the situation linked to above. In the end, I wish him the best and I hope
that people can stop trying to make something out of nothing; the world is already destructive enough without outlandish
accusations about very sensitive subjects such as racism.
Let’s make it official, in case you didn’t hear on Twitter,
I’ve signed a contract with O'Reilly Media to write a book about
Underscore.js. How did I get into this awesome situation you might ask? Well, back in
October Jeremy Ashkenas posted
on Twitter saying that if you were interested in writing
a book about Underscore.js, let him know. I submitted my proposal and it was accepted. WOOHOO!!! Needless to
say, I’m very excited about this opportunity and I’ll do my best to make sure this thing happens.
Any Suggestions?
I’m not looking to have my name on the cover of just any book; I want this book to be the best it can possibly be.
That being said, I’d love to hear anything you think could help make the book awesome. Leave
your requests/suggestions in the comments below.
Progress
I’ve already started work on this book based on the proposal sent to O'Reilly. As I make progress, I’ll keep you guys
up to speed here on ThoughtSpark.org. Feel free to reach out to me on Twitter or
in the comments with anything pertinent to this effort.
I’ve decided it’s time to rethink how I maintain and deploy ThoughtSpark.org. My
current deployment model is to use Drupal to craft/host my site’s content, paying
a small monthly fee to GoDaddy to host it. While there isn’t anything
really wrong with this model, I’ve grown tired of it. Below are a few pain points worth mentioning.
Maintenance Overhead
I’ve grown tired of maintaining Drupal. I’m tired of applying a security patch or version update and having my
whole site turn to crap. Why? All of my modules then need to be re-enabled and/or updated. How is this a
problem? All non-core functionality on my site (sitemap.xml generation, SPAM filtering, syntax highlighting, …)
is enabled via modules. If these modules are disabled during the update, and they are, I have to go through
the process of re-enabling them just so my site doesn’t look like crap and I don’t get SPAMed to Hell and back.
Don’t get me wrong, Drupal is a phenomenal product. It’s an excellent
CMS and Drupal is also a great example of what an
OSS project should be. It’s not Drupal’s fault that I don’t need a
CMS and I’m sure there is a reason that the update process is more
painful than I’d like.
Another aspect of the maintenance overhead is the fact that Drupal runs on PHP and needs a
database, MySQL in my case, so you have to either host your own server or pay someone to host it
for you. I chose the second option. The overhead for this, of course, is the financial cost, regardless of how large or
small it is.
Authoring
The options I’ve been exposed to in Drupal for authoring content are to use raw
HTML or one of the
WYSIWYG editors. The problem with raw HTML is that it’s cumbersome and
error prone. Crafting a single post can often end up with a lot of time spent finding HTML typos. The problem with
WYSIWYG editors is that you often end up fighting them. Either the output is junk or they don’t handle certain use
cases, like code blocks. Regardless, I loathe creating content in Drupal but again, I don’t feel it’s
Drupal’s fault, I just want something simpler.
One approach to creating web-based content I’ve become very fond of as of late is
Markdown. Markdown is a great language that allows me to focus more
on the content being crafted while still being able to style my content very easily. I can even drop in raw HTML
whenever I feel the need to. If you’ve ever visited any GitHub project/user homepage or
a project’s wiki, you’ve seen the result of Markdown.
The Solution
The new ThoughtSpark.org will no longer be using Drupal and will no longer be deployed on GoDaddy. Instead, I’m going
to use a static website generator that will take my Markdown files and create my website. The tool I will be using is
Middleman and I will be using GitHub Pages as my host. Not
only will writing/maintaining my website content be easier but it will also cost nothing to host. Those two
things are good enough reasons for me to switch but the following reasons are equally compelling:
- Security: With no server-side component and no server-side processing, there are far fewer security
issues that I need to concern myself with
- Performance: With no server-side component and no server-side processing, the site
will be faster
- Deployment: The solutions for hosting static websites are plentiful and you are no longer locked into a
particular host/product for hosting your website. (There’s a chance you’re already using a
service that will host static websites for you; GitHub Pages and
Dropbox are two excellent examples.)
In Closing
GitHub Pages, Markdown, Middleman and Twitter Bootstrap have made it very easy
for me to re-create and maintain ThoughtSpark.org. I feel like with this new approach, I’ll be
able to get posts out more quickly and easily, while saving a few bucks along the way. Thanks for your patience and I
look forward to sharing with you on my new platform.
Note: There are a few things left to finish before I’d say that the migration is complete, follow
here if you’re interested.
Note: Originally I had planned on migrating all of the old Drupal posts to the new platform. I’ve decided against
it for a few reasons and will instead only migrate things upon request. To request such a thing, use the
issue tracker or hit me up on
Twitter.