The Black Hole of Recruiting, part II

I previously wrote about the black hole of submitting job applications electronically.

The Wall Street Journal explains how that phenomenon occurs.

To cut through the clutter, many large and midsize companies have turned to [automatic] applicant-tracking systems to search résumés for the right skills and experience. The systems, which can cost from $5,000 to millions of dollars, are efficient, but not foolproof.

Efficient, but not foolproof? Is that a way of saying “Fast, but wrong”?
If a system very efficiently does the wrong thing, it doesn’t seem like a very good system.

WSJ goes on to say,

At many large companies the tracking systems screen out about half of all résumés, says John Sullivan, a management professor at San Francisco State University.

All well and good. I understand that it’s expensive even to evaluate people for a job, and if a company gets thousands of résumés, it needs some way to cut through the noise.

On the other hand it sure seems like a bunch of good candidates are being completely ignored. There are a lot of false negatives.

It sure seems to me that the system, as it stands now, isn’t working. Companies can rely on recruiters, but that can be very expensive. They can rely on online job boards, but that’s a recipe for lots of noise and frustration.

Thankfully, though, not all companies rely solely on online job-submission forms to source candidates.

Job Postings are so 1990s

So says ongig.com regarding online job postings and online job applications.
I agree. Especially with this part:

Candidates often feel that there is a “black hole” in the application process.

I’ve electronically submitted applications to several larger companies; the experience has been pretty uniformly opaque. Unsatisfying, to say the least. I understand why: it’s expensive to wade through the stacks of online résumé submissions. But if it’s so expensive, why solicit online applications at all?

As for e-hiring, jobs are not a commodity, and it’s inappropriate to advertise them in a catalog as if they were a book or a replacement part for my lawnmower.

I’m no longer comfortable joining “a company”. I need to meet and evaluate the hiring manager. I want to know the team strategy and how it synchronizes with the corporate strategy. I want a 360° view of the team, and I need to be confident that the team leaders understand the strategy and how they contribute to success. If key players on the team are not comfortable with strategic thinking, or are reluctant to discuss the strategy options, that’s a red flag to me. All of that is hard to do online. You can start online, but you need to progress quickly to richer conversations.

The internet is fascinating and wonderful, but, in case you needed one, this is yet another illustration that it cannot replace all human interaction.

Azure gets a well-deserved REST

In case you had any doubts about ProgrammableWeb’s data showing REST dominating other web API protocols, Microsoft, one of the original authors of SOAP, is fully embracing REST as the strategic web protocol for administering and managing Windows Azure services.

From Gigaom:

The new REST API that controls the entire system is completely rewritten, sources said. “Prior to this release, the Azure APIs were inconsistent. There was no standard way for developers to integrate their stuff in. That all changes now,” said one source who has been working with the API for some time and is impressed.

If you had 2 hours to spend learning stuff about web API protocols, spend 3 minutes understanding SOAP, and the balance on REST.
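
To see why that advice makes sense, here is a minimal sketch of what a REST-style management call looks like: a plain HTTPS GET against a resource URL, with the API version carried in a header and a client certificate for authentication. The host name, version value, and certificate file names below are placeholders of my own, for illustration, not the documented Azure endpoints.

    # A REST-style management call: a plain HTTPS GET, no SOAP envelope.
    # Endpoint, version header, and certificate files are illustrative
    # assumptions, not the documented Azure API surface.
    import requests

    SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
    url = f"https://management.example.net/{SUBSCRIPTION_ID}/services/hostedservices"

    resp = requests.get(
        url,
        headers={"x-ms-version": "2012-03-01"},   # version pinned per request
        cert=("client.pem", "client.key"),        # certificate-based auth
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.status_code)
    print(resp.text)   # the resource representation, as XML or JSON

Compare that with composing, posting, and parsing a SOAP envelope, and the 3-minutes-versus-the-balance split starts to look generous to SOAP.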


Windows Azure goes SSD

In a previous post I described DynamoDB, the SSD-backed storage service from Amazon, as a sort of half-step toward better scalability.

With the launch of new Azure services from Microsoft, it appears that Microsoft will offer SSD-backed storage, too. Based on the language used in that report (“The new ‘storage hardware’ is thought to include solid state drives (SSDs)”), this isn’t confirmed, but it sure looks likely.

I haven’t looked at the developer model for Azure to find out if the storage provisioning is done automatically and transparently, as I suggested it should be in my prior post.  I’ll be interested to compare Microsoft’s offering with DynamoDB in that regard.
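
For reference, this is roughly what explicit provisioning looks like on the DynamoDB side, where read and write capacity are parameters the developer must choose up front. The sketch uses the boto3 client; the table name and capacity numbers are arbitrary examples of mine.

    # Creating a DynamoDB table requires the developer to pick capacity up
    # front -- the opposite of transparent, automatic provisioning.
    # Table name and throughput numbers are arbitrary examples.
    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    dynamodb.create_table(
        TableName="events",
        AttributeDefinitions=[{"AttributeName": "event_id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "event_id", "KeyType": "HASH"}],
        ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 10},
    )

If the new Azure storage hides that decision from the developer entirely, that would be closer to the full step I was asking for.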

In any case, notice has now been served to magnetic disk drives: ask not for whom the bell tolls.

Hadoop Adoption

Interesting. Michael Stonebraker, who has previously expressed skepticism regarding the industry excitement around Hadoop, has done it again.

Even at lower scale, it is extremely eco-unfriendly to waste power using an inefficient system like Hadoop.

Inefficient, he says! Pretty strong words. Stonebraker credits Hadoop for democratizing large-scale parallel processing, but he predicts that Hadoop will either evolve radically into a “true parallel” DBMS or be replaced. He’s correct in noting that Google has, in part, moved away from MapReduce. Stonebraker describes some basic architectural elements of MapReduce that, he says, represent significant obstacles for a large proportion of real-world problems. He says that existing parallel DBMS systems hold a performance advantage of one to two orders of magnitude over MapReduce. Wow.
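
To make the contrast concrete, consider a simple aggregation. A parallel DBMS expresses it declaratively (SELECT word, COUNT(*) FROM words GROUP BY word) and lets the optimizer handle partitioning; with bare MapReduce, the developer hand-writes both phases. The word-count sketch below is my own illustration of the programming model, simulated in a single process, not Stonebraker’s example.

    # Word count in the MapReduce programming model, simulated in one process.
    # The developer writes the map and reduce phases; the framework normally
    # sorts, shuffles, and distributes between them.
    import sys
    from itertools import groupby

    def map_phase(lines):
        # emit (word, 1) for every word
        for line in lines:
            for word in line.split():
                yield word, 1

    def reduce_phase(pairs):
        # pairs arrive grouped by key; sum the counts per word
        for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
            yield word, sum(count for _, count in group)

    if __name__ == "__main__":
        for word, total in reduce_phase(map_phase(sys.stdin)):
            print(word, total)
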

It seems to me that, with Hadoop, companies are now exploring and exploiting the opportunity to keep and analyze massive quantities of data they had previously just discarded. If Stonebraker is right, they will try Hadoop, and then move to something else when they “hit the wall”.

I’m not so sure. The compounded results of steady development over time can bring massive improvements to any system. There is so much energy being invested in Hadoop that it would be foolhardy to discount its progress.

Companies used to “hit the wall” with simple, so-called “two-tiered” RDBMS deployments. But steady development of hardware and software over time has moved that proverbial wall further and further out. JIT compilation and garbage collection used to be impractical for high-performance systems; that is no longer true. The same holds for any sufficiently developed technology.

As I’ve said before on this blog, I don’t think Hadoop and MapReduce are ready today for broad, mainstream use. That is as much a statement about the technology as it is about the people who are potential adopters. On the other hand, I do think these technologies hold great promise, and they can be exploited today by leading teams.

The big data genie is out of the bottle.

HTTP apps? REST? JSON? XML? AJAX? Fiddler is invaluable

For developers, having access to the proper tools, and knowing how to use them, is invaluable. For any sort of communicating application, I find Fiddler2 to be indispensable. It is billed as an “HTTP Debugging Proxy”, but ignore the label; the main point is that it lets a developer or network engineer see the HTTP request and response messages, which means you can get a real understanding of what’s happening. It’s Wireshark for HTTP.

As an added bonus, it sniffs SSL traffic, and can transparently display JSON or XML in a human-friendly manner.

The name Fiddler refers to the ability to “fiddle” with requests as they go out, or responses as they arrive. This can be really helpful in trying what-if scenarios.
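
If your client is code rather than a browser, the usual trick is to point it at the proxy explicitly. Here is a minimal sketch in Python; 8888 is Fiddler’s usual default port, and the URL is a made-up example.

    # Route requests through a local debugging proxy so each request/response
    # pair shows up in its session list. 8888 is Fiddler's usual default port.
    import requests

    proxies = {
        "http": "http://127.0.0.1:8888",
        "https": "http://127.0.0.1:8888",
    }

    # verify=False lets the proxy decrypt HTTPS by re-signing it with its own
    # root certificate; alternatively, trust that root cert and keep
    # verification on.
    resp = requests.get("https://example.com/api/widgets",
                        proxies=proxies, verify=False)
    print(resp.status_code, resp.headers.get("Content-Type"))
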

Free, too.  Thanks to EricLaw for building it.

I want to point out: this tool is not commercial; there’s no training course for it, and no vendor pushing it. It illustrates that developers need to train themselves to stay current and productive. They need to keep their eyes open and add to their skills continuously in order to stay valuable to their employers.