Course One: The Strip

Chapter 5: The Algorithm

I want to talk about how you found this book.

Not the physical object — the book, the paper, the thing in your hands or on your screen. I mean: how did you find the information that led you to the decision to read a restaurant guide to Las Vegas written by a person you've never heard of, published without the backing of a major house, with no celebrity endorsement, no television tie-in, no brand partnership, no algorithmic boost from any platform that has a financial interest in directing your attention? How did this book reach you?

I am asking because the answer matters, and because the answer is, in all probability, an anomaly. The systems that govern the flow of information about restaurants — the search engines, the review aggregators, the recommendation algorithms embedded in every map application and travel platform and hotel concierge kiosk — those systems did not send you here. They do not know this book exists. If they did, they would not recommend it, for reasons I am about to explain, and those reasons are the subject of this chapter, which is not a restaurant review but something closer to an autopsy — a methodical examination of a system that is not dead but that is killing something, and I did not come to Las Vegas to perform autopsies, but I am a data journalist before I am anything else, and when the data tells me something is wrong, I do not look away.

I look at the data. That is what I do. That is all I have ever done.

Let me go back to January.

In Chapter 2, I noted an anomaly: when I searched "best restaurants Las Vegas" on three separate search engines, two recommendation platforms, and one hotel concierge app, every platform included Gordon Ramsay Hell's Kitchen in its top five results. All six platforms. I noted at the time that this level of uniformity was unusual — that algorithms are designed to produce variance, that organic consensus does not generate results this clean — and I filed the note and moved on, because I was reviewing a restaurant and I had a Wellington to eat and a methodology to follow.

I did not move on. Not really. The note stayed in my notebook, and the notebook stayed on my desk, and every time I opened it to write up a review, I saw the note, and every time I saw the note, the data journalist in me — the person I was before I was this person, the person who spent seven years identifying patterns in data sets for a living — that person said: Run the test again. Run it properly this time. You took a single sample across six platforms and found zero variance. That is not a note. That is a hypothesis. Test it.

I tested it in March, after finishing my Strip reviews, during a week I had set aside for what I euphemistically called "administrative work" — the organizing, cross-referencing, and fact-checking that makes the difference between a guide and a collection of opinions. I sat in my hotel room with a laptop, a notebook, a VPN, and the methodological discipline of someone who has designed and executed research protocols for publications that required their data to survive peer review. I was not guessing. I was testing.

Here is the test I ran:

I created five user profiles across the major platforms — Google, Yelp, TripAdvisor, OpenTable, and the concierge app used by three of the largest casino-hotel properties on the Strip. Each profile had a different apparent demographic: age, stated dining preferences, price range, cuisine interests. I varied the location signals — one profile searched from a Strip hotel, one from downtown, one from a Henderson address, one with location services disabled, one from a VPN exit node in Phoenix. I then ran the same basic query on each platform, from each profile: "best restaurants Las Vegas." Twenty-five queries in total. Five platforms, five profiles, one question.
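For readers who want the shape of the protocol rather than just its description, here is a minimal sketch in Python. The platform and profile labels come from the text above; `fetch_top_results` is a hypothetical stand-in for however one actually collects a platform's results (a real run would involve live, logged-in sessions, which I am not reproducing here).

```python
from itertools import product

# Labels taken from the protocol described above.
PLATFORMS = ["google", "yelp", "tripadvisor", "opentable", "concierge"]
PROFILES = ["strip_hotel", "downtown", "henderson", "no_location", "phoenix_vpn"]

def fetch_top_results(platform, profile, query, k=10):
    """Hypothetical stand-in: return the top-k restaurant names a platform
    serves to a given profile. A real run would drive a live session;
    this placeholder just returns an empty list."""
    return []

def run_test(query="best restaurants Las Vegas"):
    """Run the same query for every platform/profile pair and record
    each result list under its (platform, profile) key."""
    results = {}
    for platform, profile in product(PLATFORMS, PROFILES):
        results[(platform, profile)] = fetch_top_results(platform, profile, query)
    return results  # 25 result lists: five platforms times five profiles
```

The point of the matrix structure is that every result list is tagged with the conditions that produced it, so variance (or its absence) can later be attributed to platform, profile, or neither.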

I want to be clear about what I expected to find. I expected variance. I expected the profiles with different stated preferences to receive different recommendations. I expected the location-variable profiles to receive location-weighted results — the downtown profile should see more Fremont Street restaurants, the Henderson profile should see more suburban options, the Phoenix profile should see generic tourist recommendations. I expected the platforms to disagree with each other, because they use different algorithms, different data sources, different weighting schemes. I expected, in twenty-five queries, to see twenty-five meaningfully different sets of results, because that is how search and recommendation systems are designed to work — they personalize, they differentiate, they serve each user a version of reality calibrated to that user's apparent preferences and behaviors.

This is not what I found.

I found convergence.

Not perfect convergence — the results were not identical across all twenty-five queries, which would have been too clean, too obvious, the kind of result that indicates a bug rather than a feature. What I found was subtler and, to someone trained to read data, more disturbing: a convergence pattern. Across all twenty-five queries, regardless of profile, location, or platform, the same five restaurants appeared in the top ten results with a frequency that my statistical training tells me is not organic. Hell's Kitchen. A celebrity steakhouse at the Bellagio. A high-end sushi restaurant at the Wynn. An Italian concept at the Venetian. A seafood restaurant at the Aria. Five restaurants. Always present. Always in the top ten. Sometimes in different positions — the algorithms had enough flexibility to shuffle the deck — but the same five cards were always in the hand.

Five. I noticed that it was five. I did not, at the time, attach any significance to the number. I was counting results, not counting fives.
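The convergence check itself is simple enough to sketch. Given the recorded result lists, count how often each restaurant appears, and flag the names present in every single list. The restaurant names below are invented toy data, not my actual results:

```python
from collections import Counter

def convergence(result_sets):
    """Count how often each restaurant appears across the recorded
    top-ten lists, and flag the names present in every single list."""
    counts = Counter(name for results in result_sets for name in set(results))
    always = [name for name, c in counts.items() if c == len(result_sets)]
    return counts, always

# Toy data standing in for the recorded queries (names invented):
runs = [
    ["Hell's Kitchen", "Steakhouse A", "Sushi B", "Diner X"],
    ["Hell's Kitchen", "Steakhouse A", "Sushi B", "Cafe Y"],
    ["Hell's Kitchen", "Sushi B", "Steakhouse A", "Taqueria Z"],
]
counts, always = convergence(runs)
# Three names appear in all three lists; the tail entries vary.
```

The shuffle-the-deck behavior I describe above is exactly what this kind of count surfaces: position changes, membership does not.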

The convergence extended beyond the top ten. Of the twenty-five queries, twenty-one produced results drawn from a pool of approximately forty restaurants — all located on the Strip or in immediately adjacent casino properties. All operated by major hospitality groups. All with review counts in the thousands or tens of thousands. The remaining four queries — all of them run from the downtown location signal or the Henderson address — included a small number of off-Strip restaurants in their results, but even these were drawn from a limited pool, and the pool had characteristics I recognized from my years in data journalism: the restaurants in the pool all had optimized web presences, standardized metadata, and the specific technical profile of businesses that have invested in search engine optimization. They were not the best restaurants. They were the most findable restaurants, which is a different thing, and the difference is the gap between quality and visibility, and that gap is where the interesting problems live.

I want to be precise about what I am claiming and what I am not claiming. I am not claiming that a conspiracy exists. I am not claiming that someone, somewhere, is manually adjusting search results to direct tourists toward specific restaurants. I am claiming something more mundane and more troubling: that the recommendation systems used by the major search and travel platforms produce, whether by design or by emergent behavior, a remarkably uniform set of results that systematically favor large, branded, high-volume restaurants over small, independent, low-volume ones, and that this uniformity is not a bug in the system but a feature of the system's architecture — a natural consequence of algorithms that weight review volume, booking frequency, digital presence, and advertising spend over the qualities that actually make a restaurant worth visiting, which include things like whether the chef is in the kitchen, whether the menu changes with the seasons, whether the server has worked there long enough to know your name, and whether the food is any good.

The algorithms do not measure whether the food is good. They measure whether people say the food is good, which is a different measurement, and the difference is the same gap — quality versus visibility — and the gap is getting wider, and the restaurants that fall into the gap are the ones you will not find unless someone tells you about them, or unless you are the kind of person who walks until you find something, which fewer people are, because the algorithm is always there, on the phone in your pocket, ready to tell you where to eat, and it is easier to follow the algorithm than to walk, and the algorithm knows this, because the algorithm was designed by people who know this, and the design is not malicious but it is not neutral either, and the result is a city where six thousand restaurants exist and the platforms reliably surface forty of them.

Forty out of six thousand. I calculated the percentage. It is 0.67 percent. The recommendation systems that most visitors to Las Vegas rely on to find restaurants are surfacing less than one percent of the available options.

I stared at this number for a long time. Then I built a map.

The map was not the paper map on my desk — the one with the restaurant pins, the one I've been building since January, the one where I sometimes think I see shapes that aren't there. This was a digital map, built in a GIS application I'd used in my data journalism work, and it showed the geographic distribution of the forty restaurants that appeared consistently in the recommendation results, plotted against the geographic distribution of all six thousand-plus restaurants in the Las Vegas metropolitan area.

The forty clustered on the Strip. This was not surprising — the Strip is where the branded restaurants are, where the review volume is, where the advertising budget is. What was surprising was the negative space. The places the algorithm did not send people. The areas that, from the algorithm's perspective, did not appear to contain restaurants at all.

I have started calling these areas ghost zones. I do not love the term — it has a paranormal connotation I do not intend — but it is descriptively accurate. A ghost zone is an area with a high density of restaurants that the recommendation algorithms treat as if it is empty. It is not empty. It is invisible, which is worse than empty, because an empty space on a map tells the viewer there is nothing here, and the viewer can decide whether to believe the map or go look for themselves. An invisible space tells the viewer nothing, because the viewer does not know the space exists. You cannot decide to ignore a recommendation you were never given. You cannot choose the road not taken if the road is not on the map.

The ghost zones, on my map, were as follows:

Spring Mountain Road — the four-mile Chinatown corridor with approximately 150 restaurants, the densest concentration of independent dining in the city. The algorithms surfaced three of them. Three out of 150. The other 147 were ghosts.

Fremont East — the six-block entertainment district east of the Fremont Street Experience canopy. A dozen bars and restaurants, several of them among the most historically significant establishments in Las Vegas. The algorithms surfaced one. A pizza chain.

The Arts District — eighteen blocks of galleries, restaurants, bars, and performance spaces between the Strip and downtown. The algorithms surfaced zero.

East Sahara / East Charleston corridors — residential and commercial corridors with decades of independent restaurants serving the city's non-tourist population. The algorithms surfaced zero.

Henderson / Summerlin / North Las Vegas — the suburbs, collectively home to more than a million people who eat food every day. The algorithms surfaced a handful of chain restaurants with multiple locations. No independents.
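The ghost-zone criterion I am applying can be stated precisely: an area with many restaurants of which the platforms surface almost none. A sketch, using counts approximated from this chapter's own figures (the "Strip" total of 400 is an assumption for illustration; the thresholds are my choices, not anything the platforms publish):

```python
def ghost_zones(all_by_area, surfaced_by_area, min_density=10, max_rate=0.05):
    """An area is a 'ghost zone' in the sense used above if it holds at
    least min_density restaurants but the platforms surface fewer than
    max_rate (here, five percent) of them."""
    zones = {}
    for area, total in all_by_area.items():
        surfaced = surfaced_by_area.get(area, 0)
        if total >= min_density and surfaced / total < max_rate:
            zones[area] = (total, surfaced)
    return zones

# Counts approximated from the chapter; the Strip total is illustrative.
all_by_area = {"Spring Mountain": 150, "Strip": 400, "Arts District": 60}
surfaced = {"Spring Mountain": 3, "Strip": 40, "Arts District": 0}
zones = ghost_zones(all_by_area, surfaced)
# Flags Spring Mountain (3 of 150, two percent) and the Arts District
# (zero of 60); the Strip, at ten percent surfaced, is not a ghost zone.
```

The useful property of making the criterion explicit is that it separates "few restaurants here" from "many restaurants here, none visible" — the distinction between empty and invisible that the next paragraphs turn on.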

I am looking at this map right now, as I write. The ghost zones form a ring around the Strip. The algorithm has drawn a perimeter — not deliberately, not with intent, but with the cumulative effect of a system that equates visibility with value — and inside the perimeter is the 0.67 percent. Outside the perimeter is everything else. The six thousand. The restaurants where chefs cook food because cooking food is what they do, not because a television network offered them a building on the Strip. The restaurants where the server has been there for twenty-three years and calls you "hon" and does not know what SEO stands for and does not need to, because she was pouring coffee before the internet existed and she will be pouring coffee after the algorithm has been replaced by whatever comes next.

Those restaurants are invisible. Not closed. Not failing. Invisible. The map says they are not there, and the map is what most people look at when they decide where to eat, and the people who make the map have no obligation to include restaurants that do not meet the map's criteria for inclusion, and the criteria are not secret — they are volume, visibility, and the digital footprint that only well-funded operations can afford to maintain — and the result is a system that is not corrupt, exactly, but that produces, with the quiet efficiency of a machine doing exactly what it was designed to do, a city in which most of the best food is hidden from most of the people who would eat it.

I want to talk about incentive structures, because the people who build these systems are not villains. I have worked with people who build recommendation systems. I have been, in a previous life, a person who builds recommendation systems — or rather, a person who builds the data infrastructure that recommendation systems consume, which is a distinction with a difference, but the difference is smaller than I once told myself it was. The engineers and product managers who design the algorithms that surface forty restaurants out of six thousand are solving an optimization problem, and the optimization problem is real: a tourist arrives in Las Vegas with limited time and unlimited options and needs help deciding where to eat. The algorithm's job is to reduce the decision space. It does this by identifying the options most likely to produce satisfaction — measured by review sentiment, booking rate, return-visit frequency, and the absence of complaints — and presenting those options first.

This is a reasonable approach to a real problem, and the engineers are not wrong that most tourists, most of the time, will be satisfied by the options the algorithm provides. Hell's Kitchen is good. I said so in Chapter 2. The steak at the Bellagio is good. The sushi at the Wynn is good. The forty restaurants are not bad restaurants. They are the restaurants most likely to produce a satisfactory experience for the widest possible range of diners, and if satisfaction is the metric, the system works.

But satisfaction is a low bar. Satisfaction is the absence of complaint. Satisfaction is "the food arrived, it was food, I ate it, I did not get sick, the check was accurate." Satisfaction is two stars on my rating system — competent but forgettable. The algorithm optimizes for the floor, not the ceiling. It optimizes for the experience least likely to produce a negative review, which is not the same as the experience most likely to produce a transcendent one, because transcendence requires risk — the risk of a dish you've never heard of, a cuisine you can't pronounce, a strip mall on a road the app doesn't know about, a server who doesn't speak English but who will, through the universal language of bringing you food and watching your face, determine what you need and provide it. Transcendence requires the possibility of failure, and the algorithm's entire purpose is to eliminate the possibility of failure, and what it eliminates along with the failure is the surprise, and the surprise is where the four-star and five-star experiences live.

I am aware that I sound like a person with a grievance against technology, and I want to correct that impression. I do not have a grievance against technology. I have a grievance against the specific misapplication of technology that produces a map of Las Vegas containing less than one percent of its restaurants and presents that map as comprehensive. I have a grievance against any system that takes six thousand options and returns forty and does not tell the user that it has discarded 5,960 alternatives. I have a grievance against invisibility imposed by architecture rather than by choice.

And I have a professional interest — not yet a personal one, not yet — in the ghost zones. Because the ghost zones are where the interesting food is. The ghost zones are where the restaurants I haven't reviewed yet are waiting, the restaurants that will require me to leave the Strip and drive or walk into parts of Las Vegas that the algorithm says do not exist, and I am going to go there, because my methodology requires comprehensive coverage and because my curiosity requires answers and because I built a map that shows me a city that is 99.33 percent invisible, and I cannot write a guide to a city I cannot see.

There is one more thing I want to note before I leave the Strip, and I want to note it carefully, because it is the observation that moves this chapter from data journalism into something I do not yet have a category for.

In the course of running my twenty-five queries, I noticed that the convergence — the tendency of all platforms to surface the same forty restaurants — was not uniform across time. The queries I ran during business hours (9 AM to 5 PM) produced slightly more variance than the queries I ran in the evening (6 PM to midnight). The evening queries were tighter — the same restaurants in nearly the same order, the recommendations more insistent, the ghost zones more absolute. It was as if the system had a schedule, a rhythm, a pattern of behavior that varied with the clock. During the day, the algorithm relaxed. At night, when the tourists were choosing where to eat, the algorithm focused.

This is explicable. It is likely a function of real-time data weighting — at peak dining hours, the algorithm gives more weight to current booking patterns and trending searches, which are dominated by the same high-volume restaurants, which produces tighter convergence. During off-peak hours, the algorithm gives more weight to baseline signals — reviews, ratings, proximity — which produces slightly more variance. It is an optimization behavior. It is predictable. It is the algorithm doing what algorithms do, which is respond to inputs, and the inputs at 8 PM on a Friday in Las Vegas are dominated by tourists opening their phones and searching for a place to eat, and the algorithm gives them what the algorithm thinks they want, which is what the algorithm gave the last thousand tourists who asked the same question, which is the forty restaurants, which is the 0.67 percent, which is the map with the ghost zones, which is the city made invisible by the act of being measured.
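"Tighter" is measurable. One standard way to quantify how much a batch of result lists agree is the mean pairwise Jaccard overlap: 1.0 means every list contains the same restaurants, lower means more variance. The day and evening lists below are invented toy data illustrating the pattern, not my recorded results:

```python
def jaccard(a, b):
    """Overlap between two result lists as sets: 1.0 means identical
    membership, 0.0 means no restaurant in common."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def mean_pairwise_similarity(runs):
    """Average Jaccard overlap across every pair of result lists in a
    batch; higher means tighter convergence."""
    pairs = [(i, j) for i in range(len(runs)) for j in range(i + 1, len(runs))]
    return sum(jaccard(runs[i], runs[j]) for i, j in pairs) / len(pairs)

# Toy data: daytime lists vary in membership, evening lists only in order.
day_runs = [["A", "B", "C"], ["A", "B", "D"], ["A", "C", "E"]]
evening_runs = [["A", "B", "C"], ["A", "B", "C"], ["A", "C", "B"]]
tight = mean_pairwise_similarity(evening_runs)
loose = mean_pairwise_similarity(day_runs)
# tight exceeds loose: same membership, shuffled order, at dinner hours.
```

Note that Jaccard deliberately ignores ordering, which matches the observation above: in the evening the deck was barely even shuffled, but the measure would report full convergence as long as the same cards were in the hand.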

The city made invisible by the act of being measured.

I wrote that sentence in my notebook at 2 AM on a Thursday in March, after eight hours of running queries and building maps, and I stopped writing and stared at it, because it described something that I recognized not from the Las Vegas restaurant scene but from my own career — from data journalism, where the first lesson and the last lesson are the same lesson, which is that the act of measurement changes the thing being measured, and the tool you use to observe a system becomes part of the system, and there is no view from nowhere, and the data is never raw, it is always cooked, and the question is not whether the data has been manipulated but by whom and to what end and whether the manipulation is visible to the person reading the results.

I knew this. I have always known this. It is the foundational principle of responsible data work, and I built a career on it, and I left that career because I was tired of watching the principle be ignored by the people who employed me, and I came to food criticism because I believed — I still believe — that a restaurant review is one of the last forms of criticism that requires the critic to be physically present, to taste the food, to sit in the room, to experience the thing and not its data shadow.

And now I am sitting in a hotel room, running queries, building maps, staring at data, and the data is showing me that the city I came here to document is being reshaped by a system that makes most of it invisible, and the system is not evil and the people who built it are not villains and the tourists who use it are not fools, and the whole thing is working exactly as designed, and the design is the problem.

I do not yet know what to call the system. I have been thinking of it as "the algorithm," but that is imprecise — it is not one algorithm but many, running on many platforms, optimized by many teams, producing a convergent result that none of them individually intended but all of them collectively created. It is an emergent phenomenon. It has no name, no office, no CEO, no mission statement. It is simply the cumulative effect of optimization applied to dining, and the effect is a city where forty restaurants are visible and six thousand are ghosts.

In my notebook, I wrote: Find a name for it. I have not yet found one. But I have found its fingerprint — the convergence pattern, the ghost zones, the 0.67 percent — and a fingerprint is enough to begin an investigation, and I have begun one, and the investigation is taking me off the Strip.

A word about what I'm leaving behind.

I have spent five chapters on the Strip. I have reviewed a rotating steakhouse eight hundred feet above the desert, a celebrity chef franchise inside a fake Rome, a fifty-two-year-old diner that should not exist, and a sixty-seven-year-old steakhouse with a back door and a disputed founding date. I have eaten well. I have spent too much money. I have given four-star reviews to restaurants that deserved four stars and a three-star review to a restaurant that deserved three stars, and I have been honest in every rating, and my methodology has worked — has mostly worked — has worked in every case except one, and I am choosing not to dwell on the one case, because I am a professional and I do not dwell on anomalies until I have enough data to determine whether they are signal or noise.

I have also, in these five chapters, accumulated a collection of observations that do not fit neatly into any review. Table numbers that sum to five. A carpet pattern on the 106th floor. A server who predicted my wine preference with unsettling accuracy. A diner that defies economic modeling. A steakhouse with a history that operates on multiple levels, not all of them visible. A search ecosystem that makes 99.33 percent of the city invisible to the people who visit it. And a map — my map, the paper one, the one with the pins — that sometimes, when I stand back and look at it in the late-night light of my hotel room, seems to contain the suggestion of a shape I cannot quite resolve.

These observations are, individually, nothing. They are the noise that any thorough observer accumulates over three months of intensive fieldwork. The table numbers are coincidence. The carpet is decorative. The server was good at his job. The diner has a favorable lease. The steakhouse has a colorful history. The search ecosystem is doing what search ecosystems do. The map contains pins, not patterns.

I know this. My training says this. My methodology, which has served me reliably for two decades, says: discard the noise, keep the signal, do not chase patterns that your own analysis identifies as pareidolia.

But my training also says: when you have a collection of anomalies that individually mean nothing, check whether they collectively mean something. Run the correlation. Look for the thread. Because sometimes the noise is not noise. Sometimes the noise is a signal operating at a frequency your instruments were not designed to detect, and you discover this not by trusting your instruments but by trusting the feeling — the specific, professional, hard-won feeling of a data analyst who has been doing this long enough to know when the data is trying to tell them something they don't have a framework for yet.

I do not trust feelings. I have said this, in this guide and in my career, many times. I trust data, methodology, verifiable claims. I trust the things I can measure.

But I cannot measure the ghost zones from the Strip. I cannot test my hypothesis about the invisible city from inside the visible one. The next phase of this guide requires me to leave the 0.67 percent and enter the 99.33, and the first stop is downtown — Fremont East, the Arts District, the Charleston corridor — the oldest part of Las Vegas, the part that existed before the Strip, the part that the algorithm says is not there.

I am going to go look.

Practical Information

This chapter does not contain a restaurant review, and I apologize to readers who purchased this guide for restaurant reviews and have instead received a data journalism exercise. I offer, by way of compensation, the following practical information, which is derived from the research described above and which I believe is more valuable than any single restaurant recommendation I could provide:

How to find restaurants the algorithm won't show you: Turn off your phone. I am serious. Walk. Pick a direction that is not the Strip. Walk until you see a restaurant that you have never heard of, that has no line out the door, that has a sign in a language you may not read, that is located in a strip mall between a nail salon and a tax preparer. Go in. Sit down. Order something the person at the next table is eating. This method has a failure rate. The failure rate is the price of discovery. The failure rate is also, in my experience, significantly lower than you fear, because a restaurant that has survived in a strip mall without algorithmic support has survived on the quality of its food and the loyalty of its customers, and both of those things are better indicators of a good meal than a four-point-seven rating based on nine thousand reviews that all say "great vibes."

The ghost zones: Spring Mountain Road, from Valley View to Decatur. Fremont East, from Las Vegas Boulevard to Fifteenth Street. The Arts District, bounded roughly by Charleston, the I-15, Sahara, and Main. East Sahara Avenue, from Maryland Parkway east. These are the areas where the restaurants live that the platforms do not show you. I will be reviewing restaurants in all of these areas in the chapters that follow. If you cannot wait, go now. You will not be disappointed. You may be confused, overwhelmed, linguistically challenged, seated at a table with a sticky menu and a fluorescent light and a dish you cannot identify that turns out to be the best thing you have eaten in a year. This is the trade. The algorithm offers certainty and delivers adequacy. The ghost zone offers nothing and delivers everything.

One more thing: I said earlier that I did not yet have a name for the system that produces the convergence pattern. I have been looking. In the course of my research, I have found references — oblique, always oblique, never direct — in industry publications and hospitality trade journals to a platform, or a suite of platforms, or a consulting framework, that provides "customer experience optimization" services to major casino-hotel conglomerates. The references do not use a consistent name. One article calls it "a leading CX optimization partner." Another refers to "integrated hospitality analytics." A third uses an acronym I cannot verify, in a context that suggests the author was being deliberately vague.

I am not going to speculate about what this system is called or who operates it or how it works. I am going to keep looking. The fingerprint exists. The ghost zones are real. The convergence is measurable. Somewhere behind all of it is an architecture — a design, a system, an intention — and I will find it, because finding things in data is what I was trained to do, and I have never stopped doing it, and the only difference between my old career and my new one is that the data set is now a city and the unit of analysis is a meal.

I closed my laptop. I looked at the paper map. The pins were just pins. The shape I sometimes thought I saw was not there, or was there and I could not resolve it, or was there and I was not ready to see it.

I was not ready for a lot of things, in March. I was ready for the Strip. The Strip was measurable. The Strip was my methodology working as designed, mostly, with one exception I was choosing not to dwell on.

The ghost zones would be different. I could feel that already, and I did not trust the feeling, and I went anyway, because the data said there was something there that the map was not showing me, and I have always followed the data, even when — especially when — the data leads somewhere the map says is empty.

The map says downtown is empty. The map is wrong. I have been to Fremont Street. I have looked at the canopy and the neon and the old casinos and the empty lots and the pawn shops. I have also looked east, past the canopy, where the sky reappears and the buildings get lower and the signs get older and the city remembers what it was before anyone decided to optimize it.

That is where I am going next. The oldest bar in the city is there, and it has the first liquor license ever issued in Clark County, and the number on the license is 00001, and the floor contains a sealed safe from 1950, and none of this is in the algorithm's results, and all of it is true.

I am going to walk east.