Custom Search for Recently Updated Profiles

booleanstrings Boolean

Googling for recently updated profiles is a tricky business. However, Custom Search Engines, with their fascinating capabilities, allow us to turn on sorting by date in the settings, providing a convenient UI for searching for pages that have been recently updated.

If you were wondering, search results for the same query, 1) sorted by relevance and 2) sorted by date, as a rule, will not be the same. It’s not that “sorting by date” just changes the order of the results – the two result sets may overlap but will be, for the most part, different. With the maximum number of results being 100, we can, for example, expect to get a total of about 180 results from the same LinkedIn X-Ray sorted both ways.
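For the programmatically inclined, the same date sorting is available through the Google Custom Search JSON API via its `sort` parameter. Here is a minimal sketch that builds such a request URL; the API key and `cx` (search engine ID) values are placeholders you would replace with your own.

```python
from urllib.parse import urlencode

# Build a Custom Search JSON API request URL that asks for
# date-sorted results instead of the default relevance sorting.
def build_cse_url(query, api_key, cx, sort_by_date=True):
    params = {"key": api_key, "cx": cx, "q": query}
    if sort_by_date:
        params["sort"] = "date"  # same effect as sorting by date in the CSE settings
    return "https://www.googleapis.com/customsearch/v1?" + urlencode(params)

url = build_cse_url('site:linkedin.com/in "data engineer"', "MY_KEY", "MY_CX")
print(url)
```

Fetching that URL (with real credentials) returns the date-sorted results as JSON.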

Here is a new CSE:

LinkedIn – Recently Updated Profiles

Its default (and only) option is results sorted by date. Try running searches on this CSE, compare them with the relevance-sorted results from a LinkedIn X-Ray CSE, and see the difference!



X Marks the Spot


The Twitter Advanced Search dialog allows you to search by city, for example, “San Francisco, CA” – but did you know you can search for a (Latitude, Longitude) location with a radius as small as 0.01 miles?

To find the (Latitude, Longitude) for a spot on the map, right-click in Google Maps and choose “What’s here?”. As an example, I have copied the numbers for a popular conference and trade-show site in San Francisco, the Moscone Center. They are: 37.783256, -122.403952.

If, in the Advanced Twitter Search, we look for locations near “San Francisco, CA” with a radius of 0.05 miles, we will see this syntax:

[ near:”San Francisco, CA” within:0.05mi ]

Now I can replace the city name in this Twitter search string with the (Latitude, Longitude). Let’s also add a keyword, Apple, to look for events involving Apple. Here is the resulting search:

This search looks for tweets from the Moscone Center with the keyword Apple – naturally, the results are from Apple’s major developer conference held there, WWDC.
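The substitution above is easy to script. This little helper composes the search string following the near:/within: syntax shown in this post (the Moscone Center coordinates are the ones copied from Google Maps):

```python
# Compose a Twitter advanced-search string that swaps the city name
# for (latitude, longitude), using the near:/within: syntax above.
def geo_tweet_query(lat, lon, radius_mi, keyword=""):
    q = f"near:{lat},{lon} within:{radius_mi}mi"
    if keyword:
        q += f" {keyword}"
    return q

query = geo_tweet_query(37.783256, -122.403952, 0.05, "Apple")
print(query)  # near:37.783256,-122.403952 within:0.05mi Apple
```

Paste the resulting string into the Twitter search box to run the location-based search.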

Want to get ready to Source in 2018? Check out our Tools presentation.

2017 Changes to Four Major Tools


Long ago, in a previous life, I was interviewing at a start-up, and a tired interviewer, noticing that my degree was in Math, sighed and said: “Mathematics is great! The fundamentals stay the same, always. You can count on them not to change.” True! (Obviously, the amount of change at that company was overwhelming at the time.) Axioms are true and stay the same, by definition – what we make computers do, on the contrary, is in constant flux. Not only do our tools constantly change; some software tools machine-learn from experience and rewrite their own code to adjust!

I’ll write a post about the new tool concepts, Machine Learning, and Artificial Intelligence as applied to Sourcing soon, but this post is not it. Here I’ll run through a quick overview of the recent changes in a few familiar tools and sites. I’ll also be updating the Tools page. We will discuss many more Sourcing Tools, their assessments, and “how-to’s” at the once-a-year Productivity Tools Webinar next week (you shouldn’t miss it!).

Here are some notes for hands-on practitioners.

ZoomInfo (which was acquired this year) has redesigned its public profiles, leaving much less “stuff” to X-Ray. Still, we can “catch” email formats for larger companies by X-Raying or via the Custom Search Engine mentioned earlier. With the new ZoomInfo UI, we can all see the names on the previewed records (which wasn’t the case before), but without a paid subscription, the search is too rudimentary.

Conclusion: Too bad about X-Ray – some may consider a subscription.

Indeed has redesigned its public profiles, leaving people’s names out. You can only see the names when you are logged in. The change, of course, affects X-Raying. However, after a close study of the information that Indeed shares with Google, we don’t feel it’s important. Indeed (surprisingly, and quite differently from LinkedIn) doesn’t make a big effort to get public profiles indexed by the search engines. Some pages, such as jobs posted on Indeed, are not indexed at all – by design!

Conclusion: do use Indeed (and its own search, which is excellent) as one of your sources.

(Side Note: another site that is “unfriendly” to X-Raying is Craigslist).

Starting sometime in 2018, we’ll need to message on Indeed via a subscription. Sounds fine. We can also source the contact info if necessary. It looks like the resume search remains free.

Github has removed public email addresses from logged-out profiles. (That presents a bit of a challenge for tools that scrape the addresses, such as People Aggregators – their data will be harder to update.) If you want to see public emails, log in; you may need to create an account just for that, but no “account” activity is required (unless you write code). Google still “remembers” quite a few public emails that used to be on profiles, but their number is diminishing.

Conclusion: combine X-Ray and the logged-in search.
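When logged in (or when using the GitHub REST API, whose `GET https://api.github.com/users/{login}` endpoint returns user records as JSON), the public email shows up as an `email` field that is null when hidden. A small sketch of pulling it out of such a record; the sample payloads below are made up for illustration:

```python
# Pull the public email field from a GitHub user record, as returned
# by the REST endpoint GET https://api.github.com/users/{login}.
# Sample payloads below are illustrative, not real users.
def public_email(user_json):
    email = user_json.get("email")
    return email if email else None  # field is null when the email is hidden

sample = {"login": "octodev", "name": "Octo Dev", "email": "octo@example.com"}
hidden = {"login": "ghost", "name": "Ghost", "email": None}

print(public_email(sample))  # octo@example.com
print(public_email(hidden))  # None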

This year, we got a zillion Github-related tools, including one that shows the “dominant” language.

LinkedIn is now Microsoft, but we are not sure yet what this means. And LinkedIn is a site where pinpointing change is not easy – reporting once a year would be far from enough! Some of my posts this year examine how the search algorithms worked at various points in time, differently from one month to another. “Higher” paid products, such as Recruiter, use a flawed design (“Boolean or company”, etc.) that is hard to digest for most users, myself included.

Conclusion: While searching on LinkedIn, just use Boolean where possible, to avoid confusion. Keep an eye on unexpected LinkedIn interpretations.

LinkedIn continues to expand its data – and not just with new memberships. With the introduction of the “open to new opportunities” flag, posted content, job applications, etc., LinkedIn continues to acquire data and remains at the core of professional searches, globally. There is no way it can be “replaced” by another tool.

With (even paid) LinkedIn search powers unfortunately diminishing this year, we’ve figured out precise ways to X-Ray LinkedIn – and created the new tool Social List, which we are taking to a new level in January 2018.

What changes have YOU noticed in the tools and sites, as of December 2017, that would be useful to know about?

For tons of information about productivity tools old and new, please check out the Tools Class in our Training Library.




Fascinating: Custom Headline Search


I must admit that until I started using the search technique that I am about to describe, I did not realize that a significant number of LinkedIn members customize their Headlines. I had expected most members to stay with the default Headline, which is <Job Title> at <Company>. Not true. It is not easy to estimate the percentage of customized Headlines, but they are quite common.
What members put in Headlines often falls into one of these categories:

  • “I am hiring” <…>
  • “I am open to opportunities” <…>
  • <identifying the person’s desired – rather than actual – role>
  • <identifying the person’s skills – rather than just a job title>; here is an example:

On the captured profile, the job title is a plain “Software Engineer”, while the Headline highlights the person’s skills that stand out.

Clearly, it would be to our advantage if we could search for keywords in LinkedIn Headlines only. We would be finding additional leads via the Headline search. However, LinkedIn does not provide this type of search to any of its account holders – LinkedIn Recruiter included. Perhaps they haven’t thought of that.

X-Raying for profiles on Google may be an approximation of this capability since Google will give higher search results ranking to profiles where keywords appear in Headlines. Still, as we all know, X-Raying is way imprecise.

Here is where a Google Custom Search Engine and some special operators can shine. It turns out that we can search for keywords precisely in a LinkedIn profile Headline any time we use a Custom Search Engine. To achieve that:

1) Create or find an “X-Ray” CSE. Here is one I have just created: LinkedIn Smart X-Ray.
2) Use special Boolean operators, unavailable in regular Google search, narrowing the search to Headlines only.

Here are example searches that demonstrate the “search in headlines only” operator format.

  1. “Open to new opportunities” – more:p:person-role:open*to*new open to new
  2. “I am hiring” – more:p:person-role:i*am*hiring am hiring
  3. Lists a Gmail address – more:p:person-role:gmail “”
  4. Self-identified top skills example (lists the skills, not a job title) – more:p:person-role:django*python django python

Now, you can try your own searches – just use the search engine and follow the format.
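To save some typing, the operator format from the examples above can be generated: the Headline phrase is lowercased and joined with asterisks after `more:p:person-role:`, followed by the keywords. A small sketch (the joining rule is inferred from the examples, so treat it as an assumption):

```python
# Build a CSE query that restricts a phrase to the LinkedIn Headline
# field via the more:p:person-role: structured-search operator,
# following the format of the examples above.
def headline_query(phrase, extra_keywords=None):
    operator = "more:p:person-role:" + "*".join(phrase.lower().split())
    keywords = extra_keywords if extra_keywords is not None else phrase
    return f"{operator} {keywords}".strip()

print(headline_query("open to new"))    # more:p:person-role:open*to*new open to new
print(headline_query("django python"))  # more:p:person-role:django*python django python
```

Paste the generated string into the CSE search box exactly as produced.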

Isn’t that cool?


(And, as some of you may have guessed, the above technique is at the foundation of our new tool Social List – which you should try if you haven’t! It offers a 48 hours free trial.)

X-Raying for NOT Job Hoppers


Recruiters who place highly qualified full-time employees always scan resumes and profiles to see if the person is a “job-hopper”. Most employers assume we won’t be bringing people in for interviews if they changed jobs too often in the past for no good reason. We do, too.

However, not many search systems offer a way to search for non-job-hoppers, and LinkedIn is no exception. Paid accounts can show the length of the current role and the stay at the current company (potentially across changing roles), but we can’t query the lengths of past jobs.

X-Raying LinkedIn is tricky, but we can search for any words on a public profile. The profiles show job lengths phrased as “xxx years yyy months”. We can take advantage of that!

Here is an example search for people who stayed at least 3 years at each job: Example search. I am using a template: [ -year -“2 years” months “years” ].

Google’s numrange operator comes in handy in searches involving years of experience. Two periods (..) stand for numrange on Google:

  • 3..7 means any number between 3 and 7
  • 8.. means any number that is 8 or larger

Here is a search for job hoppers (not sure why someone would search for those, but someone recently asked this question in one of the Facebook Recruiter groups): OR -pub.dir “year” months -“2.. years”

Turning this search logic around, we can look for people who have demonstrated job stability – their jobs lasted 3 years or more – but who haven’t stayed at any job longer than 8 years: Example.

I have used the template: [ -year -“2 years” “3..7 years” -“8.. years” ].
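The template is mechanical enough to generate for any tenure range. A sketch that assembles it; the `site:linkedin.com/in` restriction is my assumption about how the X-Ray is scoped, and note that straight quotes work fine when pasted into Google:

```python
# Assemble the Google X-Ray template used above: exclude one-year and
# two-year stints, require a numrange of acceptable tenures, and
# exclude stints longer than the maximum.
def tenure_query(min_years=3, max_years=7):
    parts = [
        'site:linkedin.com/in',               # assumed X-Ray scope
        '-year',                              # excludes "1 year"
        '-"2 years"',
        f'"{min_years}..{max_years} years"',  # Google numrange operator
        f'-"{max_years + 1}.. years"',        # excludes (max+1)-or-more years
    ]
    return " ".join(parts)

print(tenure_query())
```

`tenure_query(4, 9)` would likewise build the query for a 4-to-9-year tenure band.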

Our presentation on Overcoming LinkedIn limitations was sold out. You can get a recording at the Training Library. We’ll repeat the webinar as soon as our schedule allows; stay tuned!

And here is a question for you: how would you X-Ray for people who do not have a current job? (Hint: it’s easy).




Hidden LinkedIn Interpretations


LinkedIn’s Big Data puts the company in a unique position to create a system of organizations, job titles, skills, and the relationships between these terms – something it used to have ambitious plans to do. I hope they will pick it up! But unfortunately, in the last few years, we have been seeing somewhat weak and inconsistent attempts to figure out the data and provide intelligent, semantic search and browsing.

There are apparent LinkedIn limitations, such as:

  1. Commercial search limit for those with a free account – that is quite serious. (We know of a “hack” to overcome that, but it’s not available to everyone).
  2. Inability to search by a group membership and by a zip code and radius in premium accounts. (We know ways around that and will be teaching it shortly).

But I would say that the “worst” LinkedIn limitation, depriving us of matching search results or showing false positives, is its ongoing half-baked interpretation of our search terms.

If we search for vice president, should we expect LinkedIn to find VP and V.P. as titles? Let’s take a look at a few test searches.

The strange numbers of results above come from this interpretation.

The clumsy term interpretations we experience on LinkedIn happen because of the

“hidden limitations of underlying abstractions”*

that Guido is talking about. The software attempts to make sense of professional data and provide semantic search – at least a semantic “flavor.” But the interpretation is rarely obvious, has pretty much never been documented in LinkedIn’s Help, and the algorithms change a lot (they changed three times in the last month by my count – each time altering the search results for some queries).

* a quote by the Python language creator, Guido van Rossum.

To confuse its users even more, LinkedIn interprets our search terms differently depending on the account – personal, Sales Navigator, or LinkedIn Recruiter. That results in mismatched numbers of search results across accounts. Sometimes Recruiter gets more results (but not necessarily the results we want); at other times, the personal account (Boolean search) “wins.”

I do hope things will improve. In the meantime,

Changes to back-end algorithms affect all of us, while the changes are hidden from us.

Enough confusion! We’ll go over the hidden limits and straighten it out in the double-webinar “Overcoming LinkedIn Limitations” next Wednesday.




Programming Languages and IT Sourcing Pitfalls


These are the top fifteen programming languages on Github, the top site where engineers collaborate on creating software. Scroll down in the advanced search dialog and you will see the lo-o-o-o-ong list of languages, starting with the 24 most popular, then listing “everything else”:

Github also lets you search for languages using the language: operator instead of the menus. You can type language:python in the Github search box. Some languages exist on Github that you may have never heard of. For example, Github has a sizeable population writing in a language called Julia:

And here is where I want to warn you.

Pitfall #1

It seems that we can search for any language we like. But in reality, we can only search for standard languages on Github. To clarify, in this case “standard” means that the language has to be in the drop-down menu in the advanced search dialog. You can search, for example, for language:HTML5 – and you will see no results because HTML5 is not a standard language name. No results may puzzle you. But a worse mistake is to search for a non-standard language along with a location. In such a case, Github will ignore your language: operator – and your results will not match what you want. Example: compare language:HTML location:”new york” and language:HTML5 location:”new york” – the latter search ignores the language: operator and just gives us everyone in “New York”.

Understandably, many of us make this mistake until we look closer. Because of this behavior, it may seem that we can search for a combination of languages, but…

Pitfall #2

Github “ORs” the languages we enter into a search, i.e., it will look for everyone who writes in one language or another; here is an example. AND is not supported on Github: there is no way to search for members who write in two or more languages. You can do so in Social List, but not on Github.
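If you do need the AND, one workaround is to run one search per language and intersect the result sets client-side. A minimal sketch of the idea; the member logins below are made-up sample results standing in for two real searches:

```python
# Github ORs multiple language: terms, so to require BOTH languages we
# can run one search per language and intersect the results ourselves.
css_users = {"ada", "grace", "linus"}    # sample results of language:CSS
html_users = {"grace", "linus", "ken"}   # sample results of language:HTML

both = css_users & html_users            # members who write in both languages
print(sorted(both))  # ['grace', 'linus']
```

The same intersection works on logins collected from the GitHub search UI or its search API, subject to the usual result limits.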

This blog post on Lever about Recruiting Developers on GitHub has some good advice, but it mistakenly assumes that we can search for language:”CSS AND HTML”. No, we can’t. It’s an honest mistake, and it is hard to catch because many results show up – but the results are not what you think!

As David Galley says, “In Sourcing, question everything.”

Don’t have the time to figure out all the search subtleties on Github and other channels to find Developers? Come join me for the fully-reworked webinar

“How to Find and Attract Technical Talent”.

Date: Wednesday, October 25

Time: noon Eastern (recordings are provided to all)

Since I used to be a “techie” (in a past life), I will add hints on sourcing and recruiting “from the candidate’s side” to the training, derived from my own experience. I look forward to sharing the material with you!





What Did The Machine Learn?


Have you seen the heated Facebook discussions where our colleagues estimate the percentage of a Sourcer’s research work that will soon be automated – anywhere from 5% to 80%? Some say that we are in a dying profession. The future will show, but I am currently with the “5%” crowd. I do agree that some other jobs will change or go away as machines “replace” people. Some new types of jobs will be created, too. But the Sourcer’s jobs and functions are not going away.

What is Machine Learning? Simply speaking, we have two types of “objects” – for example, job descriptions and resumes. We feed this type of info into the computer:

– Resume1 matches JobDescription1
– Resume2 matches JobDescription1
– Resume1 does not match JobDescription2
– Resume2 matches JobDescription2
– Resume3 does not match JobDescription2

Here Resume1, JobDescription1, etc. are just blobs of data (representing the content of resumes and job descriptions). Inside the computer, the data looks like this: 00110011100011000010… It’s hard to imagine that a human would learn anything from staring at strings of 1s and 0s. But research and real-life applications show that, in selected situations, having been “fed” enough data, the computer learns and can start performing matching on its own.
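To make the matching idea concrete, here is a deliberately naive, toy sketch (not real Machine Learning): score a resume against a job by the overlap of their keyword sets, using Jaccard similarity. Many matching systems ultimately lean on shared keywords like this, which is exactly why their matches can look plausible yet be wrong. All the keyword sets are invented for illustration.

```python
# A toy keyword-overlap "matcher": score a resume against a job by
# Jaccard similarity (shared keywords divided by all keywords).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

resume = {"python", "django", "sql", "aws"}
job_1 = {"python", "django", "rest"}   # decent keyword overlap
job_2 = {"java", "spring", "sql"}      # only one shared keyword

print(round(jaccard(resume, job_1), 2))  # 0.4
print(round(jaccard(resume, job_2), 2))  # 0.17
```

A real system learns much richer representations from labeled pairs like those above, but the core task, scoring resume/job pairs, is the same.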

From testing a number of recruiting matching systems, I can say that we are currently far away from automatically matching resumes to jobs correctly. As part of a research project for a client, my partners and I reviewed a sample of 100 resumes matched to several job descriptions by three leading software systems. Our study revealed that all three performed equally badly. In most “matching” cases we could guess a reason for the match (such as a keyword), but only about 3% of the matches sounded right. (Of course, we are picky, but still…)

There are good reasons though why ML-based systems are not matching resumes against jobs well (yet?).

One very simple reason, which I haven’t seen discussed much, is the difficulty of parsing the data in recruiting matching systems. People are bad at writing both job descriptions and resumes. (Know what I am saying?) The machine needs to do some heavy deciphering; it can use other data, such as a dictionary of term synonyms, but the task is hard. It could be that it requires more data for machines to learn than most current matching systems have. (LinkedIn would be in a position to do matching, given the amount of data, but they are behind others.)

When working recruiting matching happens – in certain areas and industries first – we will face a new challenge. Many are worried about machines potentially learning discrimination, and about matching algorithms needing to be audited or combined with “anti-bias” algorithms. But even more broadly, when ML-based hiring works fine on its own, sometime in the future, how will we get some human understanding of the reasons for its decisions? As an article from MIT Technology Review says, we need “ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise, it will be hard to predict when failures might occur.”

There are interesting efforts to make machines “explain themselves” at DARPA. I copied the image above from a DARPA research paper, which I recommend reading if the subject interests you.

Making machines tell us “what” they learn is a fascinating research topic. It is also of practical importance for the future of those areas in our industry where we do apply automation. And, learning and controlling what automated systems do will continue to require our human presence.







Do Not Procrastinate – Refresh Your Recruitment Data


Is Recruitment Data in your ATS (Applicant Tracking System) outdated? The answer is “yes” (or “yes, unfortunately”) for the vast majority of us. We also realize that updating the records would be beneficial, because:

  • People with whom we were in touch or who applied in the past (those in our ATS) are more likely to respond if we contact them
  • With the updated data, we will be finding more relevant results

It is also useful to populate, or “enrich”, our records with social profile links so that we have references to some (likely) up-to-date information anytime we access each record.

Yet there are always more urgent things for busy Recruiters to do than clean up the database; many of us keep going with outdated records. The outdated information slows us down, and refreshing only gets harder as the information ages along with the systems that keep it. It is best to take care of updating your records sooner rather than later! We will cover the topic in detail in a webinar on data refreshing.

One type of tool to consider for mass updating is usually called bulk refreshing (or enrichment); it is primarily used in Email Marketing and Sales and, I think, should be used in Recruiting more than it is. Tool examples include Clearbit, FullContact, and Pipl, among others. These tools offer APIs for use by Developers; however, non-coders can access mass refreshing simply by uploading and downloading contact files in Excel.

Which tool fits your particular needs (and budget) requires investigation and some trial runs. But regardless, enrichment tools can be of big help even before we order a list refresh. This is due to “batch previews”, where we get to see some information about our lists. (This is a sourcing hack, by the way, right here.) We at Sourcing Certification especially like the Clearbit Batch Preview, which shows some characteristics of your list, seen in the screenshot below, for free (at which point you can decide whether to pay).
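You can run a crude “batch preview” of your own before uploading anything anywhere. This sketch reads a contact file and reports how many records carry an email and which domains dominate; the column names and sample rows are my assumptions, not any vendor’s format:

```python
import csv
import io

# Sketch of a do-it-yourself "batch preview" over a contact file:
# count how many records carry an email and tally the email domains,
# before paying for a full refresh. Column names are assumptions.
sample_csv = """name,email
Ann Lee,ann@acme.com
Bo Chan,
Cy Day,cy@initech.com
Di Eve,di@acme.com
"""

def preview(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    with_email = [r for r in rows if r["email"]]
    domains = {}
    for r in with_email:
        domain = r["email"].split("@")[1]
        domains[domain] = domains.get(domain, 0) + 1
    return {"total": len(rows), "with_email": len(with_email), "domains": domains}

print(preview(sample_csv))
```

For a real file, replace `io.StringIO(csv_text)` with an open file handle; the counts tell you how much of the list actually needs enriching.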

I would like to invite you to learn about tools, methods, and pitfalls of Recruitment Data Refreshing by studying our 90-minute webinar – you can find it at




The Opposite Bug in LIR



Two days after I published an astonishing discovery about the space ” ” providing extra results, LinkedIn Recruiter quietly changed its search algorithm – again! (Big thanks to several colleagues who tried the searches, no longer saw the same results as I had posted, and alerted me.) Could it be that LinkedIn fixed the LinkedIn Recruiter problem? After the change, both examples in the post “Spaced Out!” returned the same number of results.

Unfortunately, it is too early to celebrate. The new algorithm has brought in new bugs. Let’s look at one of them. In this example, an “object” search, i.e. a selection of a standard job title, produces many more results than a “Boolean”, i.e. plain keyword, search. Compared to the way it was a few days ago, we can call it “the Opposite Bug”.

If you would like to reproduce this, search for: current title = Tax Specialist (selection or keywords), location = Greater New York Area, industry=Financial Services, and keywords=”corporate tax”.

If you think that the “object selection” type of search now does better because it produces more results, look at the results more closely. Apparently, LIR now includes what it considers to be synonyms of the standardized job titles. But they are not synonyms, are they? Here are just three examples of job titles included as what LinkedIn thinks are synonyms of “Tax Specialist”:

1) Senior Program Manager, Film Tax Credit Program; 2) Finance Intern – Tax; 3) VP Tax Reporting.

Not impressive.

Once again, searching by selecting a standard job title produces the wrong results.

Conclusion: the algorithm has changed, but Boolean still wins. Don’t forget to end your searches with a space ” ” as a shortcut to “communicating” Boolean to Recruiter.