X-Raying for NOT Job Hoppers

booleanstrings Boolean 2 Comments

Recruiters who place highly qualified full-time employees always scan resumes and profiles to see whether the person is a “job-hopper”. Most employers expect us not to bring in candidates for interviews if they have changed jobs too often without a good reason. We screen for that, too.

However – not too many search systems offer a chance to search for non-job-hoppers. LinkedIn is no exception. Paid accounts can show the length of the current role and of the stay at the current company (potentially across several roles), but we can’t query the lengths of past jobs.

X-Raying LinkedIn is tricky, but we can search for any words on a public profile. The profiles show job lengths phrased as “xxx years yyy months”. We can take advantage of that!

Here is an example search for people who stayed at least 3 years at each job: Example search. I am using a template: [ -year -“2 years” months “years” ].

Google’s numrange operator comes in handy in searches involving years of experience. Two periods (..) stand for numrange on Google:

  • 3..7 means any number between 3 and 7
  • 8.. means any number that is 8 or larger

Here is a search for job hoppers (not sure why someone would search for those, but someone recently asked this question in one of the Facebook recruiter groups):

site:linkedin.com/in OR site:linkedin.com/pub -pub.dir “year” months -“2.. years”

Turning this search logic around – we can look for people who have demonstrated job stability – their jobs lasted 3 years or more – but who haven’t stayed at any job longer than 8 years:  Example.

I have used the template: [-year -“2 years” “3..7 years” -“8.. years”].
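For those who like to script their sourcing, the templates above can be assembled programmatically. Here is a minimal Python sketch (the function name and structure are mine, not a standard tool) that builds the X-Ray string from a minimum and an optional maximum tenure, using the numrange syntax described above:

```python
def xray_tenure_query(min_years, max_years=None):
    """Build an X-Ray string like the templates above: exclude short
    stints, then require a tenure phrase in the wanted numrange."""
    parts = ['site:linkedin.com/in OR site:linkedin.com/pub -pub.dir',
             '-year']                      # drops "1 year" tenures
    # Exclude every whole-year tenure below the minimum, e.g. -"2 years"
    parts += ['-"%d years"' % n for n in range(2, min_years)]
    if max_years is None:
        parts.append('"%d.. years"' % min_years)          # open-ended range
    else:
        parts.append('"%d..%d years"' % (min_years, max_years))
        parts.append('-"%d.. years"' % (max_years + 1))   # cap the stay
    return ' '.join(parts)
```

Calling `xray_tenure_query(3, 7)` reproduces the second template: -year -“2 years” “3..7 years” -“8.. years”.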

Our presentation on Overcoming LinkedIn limitations was sold out. You can get a recording at the Training Library. We’ll repeat the webinar as soon as our schedule allows; stay tuned!

And here is a question for you: how would you X-Ray for people who do not have a current job? (Hint: it’s easy).

Hidden LinkedIn Interpretations

booleanstrings Boolean Leave a Comment

LinkedIn’s Big Data puts the company in a unique position to create a system of organizations, job titles, skills, and the relationships between terms – something it once had ambitious plans to do. I hope they pick it back up! Unfortunately, in the last few years we have been seeing somewhat weak and inconsistent attempts to figure out the data and provide intelligent, semantic search and browsing.

There are apparent LinkedIn limitations, such as:

  1. The commercial search limit for those with a free account – that one is quite serious. (We know of a “hack” to overcome it, but it’s not available to everyone).
  2. The inability to search by group membership and by zip code and radius in premium accounts. (We know ways around that and will be teaching them shortly).

But I would say that the “worst” LinkedIn limitation, depriving us of matching search results or showing false positives, is its ongoing half-baked interpretation of our search terms.

If we search for vice president, should we expect LinkedIn to find VP and V.P. as titles? Let’s take a look at a few test searches.

The strange numbers of results above come from this interpretation of terms.

* a quote by Guido van Rossum, the creator of the Python programming language.

The clumsy term interpretations that we experience on LinkedIn happen because of the

“hidden limitations of underlying abstractions.”

that Guido is talking about. The software attempts to make sense of professional data and provide semantic search – at least a semantic “flavor.” But the interpretation is rarely obvious, has pretty much never been documented in LinkedIn’s Help, and the algorithms change a lot (by my count, they changed three times in the last month – each time altering the search results for some queries).

To make its users even more confused, LinkedIn interprets our search terms differently, depending on the account – personal, Sales Navigator, or LinkedIn Recruiter. That results in mismatching numbers of search results across accounts. Sometimes, Recruiter gets more results (but not necessarily the results we want); at other times, the personal account (OR Boolean search) “wins.”

I do hope things will improve. In the meantime,

Changes to back-end algorithms affect all of us, while the changes are hidden from us.

Enough confusion! We’ll go over the hidden limits and straighten it out in the double-webinar “Overcoming LinkedIn Limitations” next Wednesday.

Programming Languages and IT Sourcing Pitfalls

booleanstrings Boolean Leave a Comment

These are the top fifteen programming languages on GitHub, the top site where engineers collaborate on creating software. Scroll down in the advanced search dialog and you will see the lo-o-o-o-ong list of languages, starting with the 24 most popular and then listing “everything else”:

GitHub also lets you search for languages using the language: operator instead of the menus. You can type language:python in the GitHub search box. Some languages that you may never have heard of exist on GitHub. For example, GitHub has a sizeable population writing in a language called Julia:

And here is where I want to warn you.

Pitfall #1

It seems that we can search for any language we like. But in reality, we can only search for standard languages on GitHub. To clarify, in this case “standard” means that the language has to be in the drop-down menu in the advanced search dialog. You can search, for example, for language:HTML5 – and you will see no results, because HTML5 is not a standard language name. No results may puzzle you. But a worse mistake is to search for a non-standard language along with a location. In such a case, GitHub will ignore your language: operator – and your results will not match what you want. Example: compare language:HTML location:”new york” and language:HTML5 location:”new york” – the latter search ignores the language: operator and just gives us everyone in “New York”.
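One defensive way to avoid this pitfall is to validate the language name before building the query. The Python sketch below refuses non-standard names instead of letting GitHub silently drop the filter (the small language set is an illustrative subset of GitHub’s drop-down menu, not the full list, and the function name is mine):

```python
from urllib.parse import quote_plus

# Illustrative subset of the names in GitHub's advanced-search menu;
# note that "HTML" is a menu entry while "HTML5" is not.
STANDARD_LANGUAGES = {"python", "javascript", "java", "ruby",
                      "html", "css", "julia"}

def github_user_search(language, location=None):
    """Build a GitHub user-search URL, refusing non-standard language
    names so the language: operator is never silently ignored."""
    if language.lower() not in STANDARD_LANGUAGES:
        raise ValueError("%r is not a standard GitHub language name; "
                         "GitHub would drop the language: filter" % language)
    query = "language:" + language
    if location:
        query += ' location:"%s"' % location
    return "https://github.com/search?type=Users&q=" + quote_plus(query)
```

With this guard, language:HTML location:”new york” builds fine, while language:HTML5 fails loudly instead of quietly returning everyone in New York.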

Understandably, many of us make this mistake until we look closer. Because of this behavior, it may seem that we can search for a combination of languages, but…

Pitfall #2

GitHub “ORs” the languages we enter into a search, i.e. it will look for everyone who writes in one language or another; here is an example. AND is not supported on GitHub. There is no way to search for members who write in two or more languages. You can do so in Social List, but not on GitHub.
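Since GitHub will only OR the languages, one workaround (my sketch of the general idea, not a GitHub feature) is to run two separate single-language searches and intersect the usernames client-side:

```python
def writes_both(users_lang_a, users_lang_b):
    """Emulate the missing AND: intersect the usernames returned by
    two separate single-language searches."""
    # Feed it the username lists from, e.g., a language:CSS search
    # and a language:HTML search, collected however you like.
    return sorted(set(users_lang_a) & set(users_lang_b))
```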

This blog post from Lever about recruiting Developers on GitHub has some good advice, but it mistakenly assumes that we can search for language:”CSS AND HTML”. No, we can’t. It’s an honest mistake, and it’s hard to catch because many results show up – but the results are not what you think!

As David Galley says, “In Sourcing, question everything.”

Don’t have the time to figure out all the search subtleties on Github and other channels to find Developers? Come join me for the fully-reworked webinar

“How to Find and Attract Technical Talent”.

Date: Wednesday, October 25

Time: noon Eastern (recordings are provided to all)

Since I used to be a “techie” (in a past life), I will add hints on sourcing and recruiting “from the candidate’s side” to the training, derived from my own experience. I look forward to sharing the material with you!

What Did The Machine Learn?

booleanstrings Boolean 4 Comments

Have you seen the heated Facebook discussions where our colleagues estimate the percentage of a Sourcer’s research work that will soon be automated – anywhere from 5% to 80%? Some say that we are in a dying profession. Time will tell, but I am currently with the “5%” crowd. I do agree that some other jobs will change or go away as machines “replace” people. Some new types of jobs will be created, too. But the Sourcer’s jobs and functions are not going away.

What is Machine Learning? Simply speaking, we have two types of “objects” – for example, job descriptions and resumes. We feed this type of info into the computer:

– Resume1 matches JobDescription1
– Resume2 matches JobDescription1
– Resume1 does not match JobDescription2
– Resume2 matches JobDescription2
– Resume3 does not match JobDescription2
(etc.)

Here Resume1, JobDescription1, etc. are just blobs of data (representing the content of resumes and job descriptions). Inside the computer, the data looks like this: 00110011100011000010… It’s hard to imagine that a human would learn anything from staring at strings of 1s and 0s. But research and real-life applications show that, in selected situations, having been “fed” enough data, the computer learns and can start performing the matching on its own.
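To make the idea concrete, here is a toy Python sketch of that learning step: a tiny perceptron that learns which shared keywords signal a match from labeled (resume, job) pairs. Real systems use far richer representations than keyword sets, and all the names here are mine:

```python
# Toy illustration of the "feeding" above: learn which shared keywords
# signal a match, using a perceptron with one weight per keyword.
def train(pairs, labels, epochs=20, lr=0.5):
    vocab = {word for resume, job in pairs for word in resume | job}
    weights = {word: 0.0 for word in vocab}
    bias = 0.0
    for _ in range(epochs):
        for (resume, job), y in zip(pairs, labels):
            shared = resume & job            # keywords in both documents
            score = bias + sum(weights[t] for t in shared)
            pred = 1 if score > 0 else 0
            if pred != y:                    # perceptron update on error
                for t in shared:
                    weights[t] += lr * (y - pred)
                bias += lr * (y - pred)
    return weights, bias

def matches(weights, bias, resume, job):
    return bias + sum(weights[t] for t in resume & job) > 0
```

After enough labeled pairs, the learned weights let the program label new resume/job pairs on its own – which is all “matching” means here.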

From testing a number of recruiting matching systems, I can say that we are currently far away from automatically matching resumes to jobs correctly. As part of a research project for a client, my partners and I reviewed a sample of 100 resumes matched to several job descriptions by three leading software systems. Our study revealed that all three performed equally badly. In most “matching” cases we could guess a reason for the match (such as a keyword), but only about 3% of the matches sounded right. (Of course, we are picky, but still…)

There are good reasons, though, why ML-based systems are not matching resumes against jobs well (yet?).

One very simple reason that I haven’t seen discussed much is the difficulty of parsing the data in recruiting matching systems. People are bad at writing both job descriptions and resumes. (Know what I am saying?) The machine needs to do some heavy deciphering; it can use some other data, such as a dictionary of term synonyms, but the task is hard. It could be that it requires more data for machines to learn than most current matching systems have. (LinkedIn would be in a position to do matching, given the amount of data it holds, but they are behind others.)

When working recruiting matching happens – in certain areas and industries first – we will face a new challenge. Many are worried about machines potentially learning discrimination, and about matching algorithms needing to be audited or combined with “anti-bias” algorithms. But even more broadly: when ML-based hiring works fine on its own, sometime in the future, how will we gain a human understanding of the reasons for its decisions? As an article from MIT Technology Review says, we need “ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise, it will be hard to predict when failures might occur”.

There are interesting efforts to make machines “explain themselves” at DARPA. I copied the image above from a DARPA research paper, which I recommend reading if the subject interests you.

Making machines tell us “what” they learn is a fascinating research topic. It is also of practical importance for the future of those areas in our industry where we do apply automation. And, learning and controlling what automated systems do will continue to require our human presence.

Do Not Procrastinate – Refresh Your Recruitment Data

booleanstrings Boolean 3 Comments

Is the recruitment data in your ATS (Applicant Tracking System) outdated? The answer is “yes” (or “yes, unfortunately”) for the vast majority of us. We also realize that updating the records would be beneficial, because:

  • People with whom we were in touch or who applied in the past (those in our ATS) are more likely to respond if we contact them
  • With the updated data, we will be finding more relevant results

It is also useful to populate, or “enrich”, our records with social profile links so that we have references to some (likely) up-to-date information anytime we access each record.

Yet busy Recruiters always have more urgent things to do than cleaning up the database; many of us keep going with outdated records. The outdated information slows us down, and refreshing only gets harder the longer the information – and the systems that keep it – stay neglected. It is best to take care of updating your records sooner rather than later! We will cover the topic in detail in a webinar on data refreshing.

One type of tool to consider for mass updating is usually called a bulk-refreshing (or data-enrichment) tool. These are primarily used in Email Marketing and Sales and, I think, should be used in Recruiting more than they are. Examples include Clearbit, FullContact, Pipl, and Hunter.io. These tools offer APIs for use by Developers; however, non-coders can access mass refreshing simply by uploading and downloading contact files in Excel.

Which tool fits your particular needs (and budget) requires investigation and some trial runs. But regardless, enrichment tools can be of big help to us even before we pay to refresh a list. This is due to “batch previews”, where we get to see some information about our lists. (This is a sourcing hack, by the way, right here.) We at Sourcing Certification especially like the Clearbit Batch Preview, which shows some characteristics of your list, seen in the screenshot below, for free (at which point you can decide whether to pay).
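If you do go the upload/download route, a little pre-cleaning pays off. Here is a small Python sketch (stdlib only; the “email” column name is an assumption – match it to your ATS export) that drops rows without an e-mail and dedupes a contact file before a bulk-enrichment upload:

```python
import csv
import io

def prep_contacts(csv_text):
    """Drop rows without an e-mail and dedupe by e-mail, so a
    bulk-enrichment upload isn't billed for junk or duplicate rows."""
    seen, clean = set(), []
    for row in csv.DictReader(io.StringIO(csv_text)):
        email = (row.get("email") or "").strip().lower()
        if email and email not in seen:
            seen.add(email)          # first occurrence wins
            clean.append(row)
    return clean
```

Most enrichment services charge per row, so trimming empties and duplicates first keeps both the bill and the noise down.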

I would like to invite you to learn about tools, methods, and pitfalls of Recruitment Data Refreshing by studying our 90-minute webinar – you can find it at https://sourcingcertification.com/datarefresh/.

The Opposite Bug in LIR

booleanstrings Boolean 8 Comments

Two days after I published an astonishing discovery about the space ” ” providing extra results, LinkedIn Recruiter quietly changed its search algorithm – again! (Big thanks to the several colleagues who tried the searches, no longer saw the same results as I had posted, and alerted me.) Could it be that LinkedIn fixed the LinkedIn Recruiter problem? After the change, both examples in the post “Spaced Out!” returned the same number of results.

Unfortunately, it is too early to celebrate. The new algorithm has brought in new bugs. Let’s look at one of them. In this example, an “object” search, i.e. a selection of a standard job title, produces many more results than “Boolean”, i.e. a plain keyword search. Compared to the way it was a few days ago, we can call it “the Opposite Bug”.

If you would like to reproduce this, search for: current title = Tax Specialist (selection or keywords), location = Greater New York Area, industry=Financial Services, and keywords=”corporate tax”.

If you think that the “object selection” type of search now does better because it produces more results, look at the results closer. Apparently, LIR now includes what it considers to be synonyms of the standardized job titles. But they are not synonyms, are they? Here are just three examples of job titles included as what LinkedIn thinks are synonyms of “Tax Specialist”:

1) Senior Program Manager, Film Tax Credit Program; 2) Finance Intern – Tax; 3) VP Tax Reporting.

Not impressive.

Once again, searching by selecting a standard job title produces the wrong results.

Conclusion: the algorithm has changed, but Boolean still wins. Don’t forget to end your searches with a space ” ” as a shortcut to “communicating” Boolean to Recruiter.

Spaced Out!

booleanstrings Boolean 9 Comments

If you use LinkedIn Recruiter, I highly recommend at least skimming this post and the next.

LinkedIn Recruiter fooled me this time! I was searching for managerial-level people, putting the word “manager” in the title and varying other parameters. After a while, I started feeling suspicious about the number of results. It seemed unreasonably small. After a bit of investigation, and remembering how messy the “company names or boolean” search option is, I discovered this.

Adding a space ” “

After the job title

In LinkedIn Recruiter search

Multiplies results

I will explain what is going on there in a second, but let me share a couple of screenshots first. If you don’t care about the explanation, that’s all you want to see.

>>> Start adding a space after job titles and you will be getting tons of extra results.

Here is another example:

So why does the search behave in this bizarre way? This is a “side effect” of LinkedIn trying to make sense of its data (and not doing a great job there; pun intended). As part of its internal classification, LinkedIn has given numerical IDs to members, companies (responsible for this odd behavior in Recruiter), groups, skills, etc. Apparently, they also gave numerical IDs to the “standard” job titles that they have identified – such as “manager” and “senior manager” (but not “mgr” or “Sr. Manager”).

For example, for the standard title “Manager”, the internal ID equals 2. For “Senior Manager”, ID=50. (Inspecting the back-end call, we see “&jobTitleEntities=Senior+Manager_50” or “&jobTitleEntities=Manager_2”.) A search for a standard title with an assigned ID – for example, “Manager” selected from the menu of job titles – will pull up ONLY the profiles where the title is exactly “manager” (no more words) or is somehow tied to the standard “manager” title. (It’s a tricky business; I will go into more detail in a future post.) A problem with searching for job titles by ID is that lots of profiles that have the word “manager” as part of the title will not show in this kind of search. The added space ” ” helps – it switches the internal search to not use the IDs and to search by the keywords instead (the call will look like “&jobTitleEntities=Senior+Manager” or “&jobTitleEntities=Manager”); the latter will get us many more results.
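The two back-end forms can be illustrated with a short Python sketch. The parameter name and the “_&lt;ID&gt;” suffix are as observed above – treat them as reverse-engineered, not as a documented API:

```python
from urllib.parse import parse_qs, urlencode

def title_entity_param(title, entity_id=None):
    """Build the jobTitleEntities parameter: "Title_<ID>" for an
    ID-based (standard-title) search, bare "Title" for keywords."""
    value = "%s_%d" % (title, entity_id) if entity_id is not None else title
    return urlencode({"jobTitleEntities": value})

def is_id_search(query_string):
    """True when the parameter carries a trailing numeric ID."""
    value = parse_qs(query_string)["jobTitleEntities"][0]
    return value.rsplit("_", 1)[-1].isdigit()
```

So title_entity_param("Senior Manager", 50) yields the ID-based form, while title_entity_param("Manager") yields the keyword form that the trailing space triggers.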

Recruiters: ALWAYS use Boolean search in the job title in LinkedIn Recruiter or RPS. Don’t use the selection of standard titles – if you do, it can throw away up to 90% of matching results.

Job Seekers: Use everything standard on LinkedIn – that includes your job titles as well. Otherwise, recruiters, who trust the system, and did not read this post, will not find you.

LinkedIn: When will you get it right?

Update! Two days after this post was published, the discrepancy shown in the two screenshots above stopped happening (as a couple of readers have noticed, too). We will never know whether my post influenced the LI Engineering Team to make the change, or whether it is a coincidence. Well, at least my examples from the two screenshots above now return the same results, which is already an improvement. I am happy that this post got outdated so quickly! Alas, other discrepancies show up in the new release. I am not yet able to explain the new algorithm, but, whatever it is, we are already seeing search inconsistencies and bugs around the job title search.

Want to help me?

[Sourcing Challenge, for LIR Users] Come up with a plausible-sounding, tested, and example-backed algorithm for how LIR parses and searches for the user’s input in the job title field. What is going on with searches by a title as an object with an ID? With a search by Boolean? Is there any sort of “semantic” interpretation in either case? The first person who emails me the right answer will get a prize.

Don’t Save the String

booleanstrings Boolean 2 Comments

It’s funny that people in our industry talk about Boolean Strings as if those strings were “heavy”, complex, and lasting. Just think of “building a string”, “crafting a string”, “saved Boolean Strings”, and “Boolean Strings storage”. Boolean Strings storage is serious business.

But you know what? Saving Google Boolean search strings is just like saving the sentences you say so that you can repeat them later. (Feel free to disagree).

Those of us with teenage children may have to repeat the same “string”, like “it’s time to get up”, “it’s time to get up”, “it’s time to get up”. (Can you relate?) But we don’t “reuse” things we say in real life. Knowing the language, we can phrase what we mean. Now, the “Boolean” language is simpler than any human language, so why spend the time saving and organizing “Strings”? (It’s not all black-and-white of course. Saving some notes on a search, or a long OR string of target companies, or sharing a search string with a colleague, as necessary, are all perfectly reasonable).

There is also an exception regarding “saving Strings” for novices – saving searches may help to learn the Boolean language. If we have just started to learn a foreign language, we may keep the top 10, or 300, expressions in a phrasebook or a language-learning app. But we can only say so much in a language until we learn it well enough to stop checking the “cheat sheets”.

Saved strings also don’t reflect the full scope of performed searches. There are always parameters and settings that are not reflected in the strings and may significantly affect the results. (Saving a search URL will take us to a closer – though still not “identical” – reproduction of a search.)

Why have I written the e-book “300 Best Boolean Strings” then? It is simple: the book is intended to explain how to search for a variety of social profiles and professional information, and the multiple strings are examples – they are not something to reuse. (The strings in the book are links you can follow, so the URL parameters are also “saved”.)

Expressions in a tourist phrasebook stay relevant for a long time. But Google search strings that produce desirable results change a lot. (Each new edition of the Book has required more than 25% of the queries to be rewritten.)

So here is a message to Saving Boolean Strings practitioners: consider dropping the practice, unless you have novices to train. If you do keep the Strings, remember that they are getting outdated as we speak.


How to Fight the Lack of Features in Recruiter

booleanstrings Boolean

Given the UI design of the advanced people search dialog in LinkedIn Recruiter (which I would call user-unfriendly), there couldn’t possibly be a clean resolution for the vague “companies or boolean” field:

Indeed, if one word is entered, and it is a company name (like Apple), will it look for employees of that particular company (Apple) or for people from all the companies with this word (“apple”) in their names? It is unclear from the UI. There is a big difference between the two searches, and we may want to do either. In fact, a basic FREE account conveniently has both capabilities – we can either search for a keyword in the company name or select companies:

Returning to Recruiter – if you select a company from the offered list in the “company or boolean” field, it will NOT search for the keyword, but just for that company. Thus, it only duplicates the exact same functionality found in another corner of the same vast people search dialog.

However, when I search for a company name, I often want to include the same company registered as a different company object on LinkedIn (perhaps due to a different location or division). Here is a (random) example of several entries in LinkedIn’s company list that seem likely to be part of the same company:

If I go with the company choices, I need to select each entry separately. If I only select the first entry, “Netrix”, I get only two results – for members whose company is “exactly” Netrix.

Here is a hack that brings this useful feature – company keyword search – back to Recruiter. Use a Boolean string that looks like this. It is a choice between your keyword and something that never happens. Now we get many more results than two:

Problem solved!

Here is a sourcing challenge for my readers who also have LIR (Recruiter). Suppose we are searching by one keyword in the “company or boolean” field, and that word is not, by itself, a company name. How will the search be interpreted?

P.S. In response to Katie’s comment and question below, I have found the shortest string that will look for the keyword, not the company. Just add a space after the word and get many more results! See below. (The troubling thing is, the same bizarre syntax rules apply to the Job Title.)

You’ve heard of SourceCon Austin (Guest Post by Dave Galley)

booleanstrings Boolean

This is a guest post from my business partner David Galley, a brilliant Sourcer from whom I learn every day. David will be speaking at the upcoming SourceCon in Austin. If you are going, please say “hi” to him, and I certainly recommend attending his talk.

-Irina

Now get ready for

Purple Squirrel in a Curl

Wondering, “What the #@$%*! does that mean?” You are not alone.

I ask myself the same question on a daily basis, though usually in response to a jargon-laden job description or poorly written resume. That is when I am not suffering from total information overload. There are so many details to keep track of, so many potential candidates to pursue, so many places to look for them, and so many different approaches to searching!

Sometimes keeping track of the what and how (never mind the who) seems like a full-time commitment all its own. How can you avoid falling down rabbit holes chasing the latest in browser extensions, databases, and search strings?

The best solution I’ve found is to focus on asking the right questions, which is what my upcoming SourceCon presentation: “‘To AND, OR NOT to AND,’ Is Not The Question” is all about. Will you be at SourceCon? Don’t be shy, swing by and say hello! I’ll be demonstrating the right (and wrong) questions to ask in order to find your target candidates in Grand Ballroom A at 2:25PM CDT on Wednesday, September 27.

While you’re in the neighborhood, here are more fantastic George Boole track presentations for you to enjoy:

You can find the full schedule (including four more content tracks, some amazing keynote speakers, and more) over at the official SourceCon Austin 2017 site. (Psst! Need a ticket? Click here and use code ATX17DG.)

Here’s one for the road. You’ve heard of SourceCon Austin, now get ready for

It says, "I won't tell, that would be cheating."

(Consider the answer to that one your pre-SourceCon homework.)