From Noise to Signal: How to Avoid Getting Buried in Online Clickbait with Alex Fink
“The mission is to improve the quality of information people consume.” —Alex Fink
As the internet has expanded, so too has the sheer volume of information available at our fingertips. However, many creators prioritize sensationalism and virality over factual reporting and thoughtful analysis. This has led to the rise of clickbait headlines, misleading summaries, and opinion pieces masquerading as objective news. As artificial intelligence further automates content generation, the problem of digital junk threatens to spiral even more out of control. Addressing this challenge will be key to ensuring people have access to information that truly informs and enriches their lives in the digital age.
Hence, this week’s episode features thought-provoking perspectives on improving information quality and shaping technology for the better with AI pioneer, Alex Fink. Alex is the founder of OtherWeb, a platform focused on improving online content quality that has grown to over 7 million users under his leadership as CEO. With over 15 years of experience in engineering and executive roles, Alex is dedicated to advancing AI and addressing the problem of low-quality online information.
Listen in as JP and Alex dive deep into the evolution of online content quality, challenges in determining truth and addressing biases in the media, AI’s potential impact on various professions in the coming years, insights into leading AI language models, Tesla’s approach to self-driving, and the intersection of AI and blockchain.
Episode Highlights:
- 01:16 The Evolution of Online Content Quality
- 08:16 Fact-Checking News Media
- 13:07 AI’s Impact on Various Professions
- 18:23 AI Language Models and Their Future Prospects
- 22:44 Tesla’s Approach to Self-Driving Cars
- 26:40 AI’s Impact on Blockchain
- 32:25 Solving the Problem of Digital Junk
Resources:
Get Your Copy of JP’s Book
Quotes:
- 03:18 “The more content that’s being produced, trying to determine what is actually worthy of consuming is a real difficult task.” —JP McAvoy
- 06:21 “Quality is dependent on your goals, essentially.” —Alex Fink
- 12:08 “The best way for somebody today to be informed is to consume the best version of the right and the best version of the left.” —Alex Fink
- 18:49 “The pioneer doesn’t always win. The last entrant that is big enough typically takes the market.” —Alex Fink
- 20:18 “We can’t progress further without changing the paradigm.” —Alex Fink
- 29:55 “The mission is to improve the quality of information people consume.” —Alex Fink
A Little Bit About Alex:
Alex Fink is an AI pioneer with over 15 years of experience in engineering and executive roles in the technology industry. Alex spent 10 years in Silicon Valley working on computer vision and camera technologies. In 2016, he moved to Austin, Texas where he continues his work in AI.
Currently, Alex is the CEO and founder of Otherweb, an online platform dedicated to improving the quality of information consumed on the internet. Under his leadership, Otherweb has grown to over 7 million users. Alex’s goal is to address the growing problem of “digital junk” and low-quality online content by developing new tools and methods to better evaluate and discuss news and information.
TRANSCRIPTION:
JP McAvoy: Hi, and thanks for joining us on today’s show. We’ve got Alex Fink who’s an AI pioneer, a Silicon Valley vet. He’s the CEO of the Otherweb. He spent 15 years in various engineering and executive roles in tech. And he’s dedicated himself to solving the biggest problem on the internet today, which is digital junk. Here’s a conversation with Alex.
Hi, Alex, thanks for joining us here today, I guess from Texas, right? What part of Texas are you in?
Alex Fink: Austin, Texas. Hi, JP.
JP McAvoy: In Austin. Good to see you. Thanks for joining us. We were just speaking a moment ago; we’re both Silicon Valley alumni. How long have you been in Austin?
Alex Fink: Past six years. And before that, I was in Silicon Valley for 10 years.
JP McAvoy: Ten years there, then Austin roughly six years ago. I guess it was just before that when I’d left as well. So why Austin? A lot of people have gone to Austin.
Alex Fink: We didn’t really have a good reason other than at some point, we decided that we want to leave California. We just scheduled a different weekend in every city in North America, and Austin ended up being number one just based on our impression, even though we visited it in July. So we saw the worst of it, and still decided to come.
JP McAvoy: And maybe a little ahead of the curve at the time as well, six years. How long have you been working on AI?
Alex Fink: In the current space, which is natural language processing for about two and a half years. Before that, I did quite a bit of work on computer vision, which is technically AI. But I was closer to the hardware side, more on the camera side of things.
JP McAvoy: Can you talk through the progression of that? How does one lead to the other?
Alex Fink: There wasn’t a natural progression, other than some of the skills being transferable. But essentially, I got accidentally pigeonholed into perception systems, imaging, cameras, computer vision, that sort of thing, from 2007 onwards. And so I did 15 years in that space. And at some point, I just had this crisis of conscience where it seemed to me like the world doesn’t need more cameras, and I’m not really improving it by building more of these things. I ran my own consulting company, and over the previous seven years, I shipped more than 35 different camera-based products. So you could say that there are a lot of cameras in the world thanks to me, or because of me. I don’t know if that’s something to be thankful for. I decided: what is the biggest problem I can try to fix in the world that I would actually be proud of fixing? And it seemed like it’s the quality of information people are consuming. For some reason, even though we’re producing all this amazing information, at the end of the day, people consume junk. And so I wanted to try to fix that, or at least to ameliorate it to some extent.
JP McAvoy: Yeah. And let’s drill down on that. Because it’s so true: the more content that’s being produced, the more it seems like so much of it is just crap. Trying to synthesize it, trying to determine what is actually worthy of consuming, is a really difficult task. And that’s one thing you’re obviously trying to solve.
Alex Fink: And it’s becoming worse and worse, and it will become even worse now. Because with generative AI, content creation is becoming essentially free. So the cost of creating content, at least low-quality content, is going down. The way to monetize content is not really changing in any significant way. And so what we’re going to see is more and more bad content chasing the same number of dollars.
JP McAvoy: When we say bad content chasing dollars, I guess that’s part of the problem, right? It’s the chasing dollars. People, and I use that too generally, but generally speaking, people aren’t concerned with creating quality content. It’s just mass-produced, or so it appears from what we see so often. Why do you think that is?
Alex Fink: I think they are concerned, or at least some of them are, but it doesn’t pay. If you look at how content gets monetized, most of it gets monetized through advertising. Advertisers pay per click or per view. There’s no pay-per-quality or pay-per-truth. And so if those are the only selective pressures on this population of content, you’re going to see evolution under the selective pressure to maximize this one trait, and all the other traits get left behind; the evolution happens at their expense. And so I think for the past twenty years or so, since Google launched AdSense in 2003, we’ve seen this constant drift of everything towards clickbait. It’s not like only the bad outlets shift and the good ones stay where they were; everybody shifts. So you still see a difference between the great outlets and the bad ones, but the whole bell curve has shifted from where it was previously. This is why you open CNN and occasionally see an article with a title like, “Stop what you’re doing and watch this elephant play with the bubbles.” That actually happened. This is why, if I go to Google right now and search for the best protein powder, the number two result is from Forbes, which actually has an article ranking protein powders. Why? Because they get clicks. There’s no other reason. I’m pretty sure that the person who wrote it did not actually take any of those protein powders, or check any of the Consumer Reports. They just copy-pasted them from somewhere, or asked ChatGPT, I don’t know. But they’re generating this level of content, even though they are Forbes. I would expect them to try to protect their brand name. Getting clicks is more important right now.
JP McAvoy: It really is just getting the eyeballs, and it becomes frustrating trying to consume. So it’s interesting, as you talk about the quality of it. What defines quality? What counts as quality content these days?
Alex Fink: So it really depends on what you’re trying to get. Quality is dependent on your goals, essentially. If you’re trying to read news that will inform you in some way, then quality is more information with less emotional baggage. Lack of quality is more emotional baggage with less information. It’s not an objectively, scientifically defined trait. It is just how most people would actually select what they view as higher quality or lower quality. Obviously, if you’re looking to get entertained, then elephants blowing bubbles are just fine. That is high-quality entertainment. If you’re looking to just click on some random protein powder, maybe that top 10 list from Forbes is good for you. Though, again, I would think that at least going to someplace that did an actual review would be higher quality in terms of product selection than an article that copy-pasted the marketing descriptions of 10 products randomly and inserted affiliate links. So it depends. Essentially, our first focus was on news. And in news, the best way we could come up with to define quality is to start from all the negative traits that everybody can agree are negative, the ones where, generally, everybody who sees the trait can agree that this is a good example of it. And then we train small models to detect each one. A clickbait headline is a narrow, well-defined trait; most people will agree when they see one. Therefore, we built a model to detect it. Or something like subjective language with a lot of adjectives. Generally speaking, if you have a lot of adjectives and adverbs, and not that many proper nouns, you are speaking more subjectively, trying to describe things in a more colorful way. So we can detect that. We can give some numeric scores of what that is. You can decide for yourself whether that is what you’re looking for or not. Chances are, if you’re trying to get as much information in as few words as possible, the adjectives are useless to you.
You don’t need to know that somebody passed a draconian law. You just need to know that they passed the law. The draconian part is meant to convince you, it’s not meant to inform you.
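The adjective-counting heuristic Alex sketches can be illustrated with a toy score. This is a hypothetical sketch, not Otherweb’s actual model: a tiny hand-picked word list stands in for a real part-of-speech tagger, and capitalized mid-sentence tokens serve as a rough proxy for proper nouns.

```python
import re

# Toy subjectivity score in the spirit of the heuristic above: many
# adjectives/adverbs and few proper nouns suggests colorful, subjective
# writing. The lexicon is illustrative and tiny; a real system would use
# a trained part-of-speech tagger or classifier.
SUBJECTIVE_WORDS = {
    "draconian", "amazing", "shocking", "incredible", "stunning",
    "terrible", "unbelievable", "truly", "really", "very", "extremely",
}

def subjectivity_score(text: str) -> float:
    tokens = re.findall(r"[A-Za-z']+", text)
    if not tokens:
        return 0.0
    subjective = sum(1 for t in tokens if t.lower() in SUBJECTIVE_WORDS)
    # Capitalized tokens after the first word: a rough proper-noun proxy.
    proper = sum(1 for i, t in enumerate(tokens) if i > 0 and t[0].isupper())
    return subjective / (subjective + proper + 1)
```

A loaded sentence like “Parliament passed a draconian, shocking law in a truly unbelievable move” scores high, while “Parliament passed the law on Tuesday” scores zero: same event, but one version is packed with adjectives meant to convince rather than inform.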
JP McAvoy: Yeah, it’s interesting. What we’re seeking many times is what we consider to be the truth, or reality, if you will. And I throw these words out, truth and reality, because part of the issue is: who defines those? Who determines those? I’ve got a couple of clients working on tools that try to assess the truthfulness of statements or reporting. How do you get around that? Or how do you address that issue?
Alex Fink: There is the general answer, which is that determining truth is something that happens basically through a social process; the truth has to get filtered upward, essentially. You can’t find any one person who can look at any set of facts and determine whether they’re true, among other things because sometimes the information is not available in real time. Say you tell me there’s a lion in your kitchen. Unless I can break into your house and go into your kitchen right now, I can’t verify that. So the best way that I can determine the truth is to come up with some process for how I’m going to test your claim, what evidence I would require from you, and so on. Now, if you make a more general claim, like “vitamin C improves cold symptoms,” I can’t determine whether that’s true with any set of experiments that I do myself. It will just be insufficient. So for the most part, we have many scientists doing various experiments. Then we have other scientists trying to disprove the findings of the previous experiments. You have peer-reviewed journals, with people basically testing and challenging each other’s claims. And somehow, after 20 years, we get a general idea that, to the best of our knowledge, yes, it helps with symptoms; no, it doesn’t help with frequency, or duration, or things like that. So that’s how we arrive at something that appears to be the best truth we know right now. Now, when you have fact-checking in the news, they encounter the same problem. Fact-checking essentially tries to short-circuit this entire process by setting up one arbiter that has to determine whether something is true. They don’t have time to go through the process. They don’t have time to ask for evidence and wait for a response. They just have to give you a verdict right away, and that’s why it’s usually not that trustworthy.
So even if it’s the best they can do and there is no underhanded motive, it’s really hard to trust them just because you can see that the arguments they use to verify or disprove something are not what we’re using for truth verification standards, from the scientific community.
JP McAvoy: Yeah, interesting. You’re saying from the standards of the scientific community; I think as common readers, we’re constantly considering the source. If you look at where it’s coming from, you probably understand that there’s a certain bias. Politics are the most divided we’ve ever seen, and two different news sources covering the same week don’t cover the same events anymore. One news source, leaning whichever way, covers one set of events; the other, on the other side, and we’re discussing American politics here, covers another. It’s like there are two whole different worlds occurring, depending on whose reality or what vantage point you’re looking from. Do you try to address that in any way? Or how do you try to capture that?
Alex Fink: So I think the best way that I can suggest people address it, and that’s what we’re trying to do on our platform, is to actually sample from both, but give you the best version of both. Because in each of those two worlds you described, there are relatively high-quality sources, and there’s basically just yellow press. And I think that the best way for somebody today to be informed is to consume the best version of the right and the best version of the left. Or even if you ignore right and left for a second, there are other axes in the news right now that aren’t really correlated with right versus left. Let’s say pro-Israeli versus pro-Palestinian: I don’t think you can get a clear image of the conflict as it’s unfolding now unless you’re consuming the Jerusalem Post and Al Jazeera. Or, if we stick to the Western press, unless you’re consuming the Wall Street Journal and the BBC, because the BBC at this point is pretty close to Al Jazeera, and the Wall Street Journal sounds a lot like the Jerusalem Post these days. You need both. Where they overlap, that’s probably true. And where they disagree, that’s probably where you should put in more time to research. Maybe one of them is right, maybe they’re both lying. Maybe they’re both showing you one side of the elephant, but one is the rear right leg and one is the trunk. And to assemble the full picture, you have to read all sides.
JP McAvoy: It really is difficult now. I tried to do as you’re describing, and it’s really difficult to pick a sufficient number of vantage points and filter through, as you say, and then try to read it in an unbiased way. It really is one of the challenges. So it’s great that you’re looking to resolve those things, or the Otherweb is doing just that. How much do you think AI, generally, how much it’s going to impact things in the next five years? How much is the world going to change as the rate of AI adoption continues to increase? How different are things going to look in 2029, 2030 compared to the way they are right now?
Alex Fink: I would say it depends on your profession. In some professions, it will be a really profound change. And in some professions, you will hardly feel it. If you’re a plumber, you’re probably not going to see AI competing with you or helping you in any way, except maybe scheduling your appointments. If you are a journalist, then you’re already seeing what’s going on. We just saw at least three distinct rounds of layoffs in the last week. Because, as we mentioned when you and I spoke before the show, content generation is becoming much cheaper. And content generation is what journalists do, even though they really don’t like the word “content” because it seems like the lowest common denominator to them. But it is a form of content. Maybe journalists are really good at it compared to the average person, but AI is getting better and better. And so it’s competing with them from the bottom and essentially pulling down the price of the product they are creating. So in the next five years, I wouldn’t be surprised if the number of journalists able to make a good living is substantially lower. Probably less than half. And in the long run, it might be even fewer. Other professions, I’m not sure. You’re in law; I think some parts of law will also be automated away, or at least expedited to the point where one person can do the work of 10, and some parts of law will probably not be affected at all. I don’t think a trial lawyer is going to be either replaced or augmented by AI anytime soon, because he needs to go there and convince people, and people do not get convinced when a robot makes an argument. So he still has a job. But somebody who’s just, let’s say, redlining contracts of the same form over and over again, and I know there are lawyers for whom that’s most of what they do, they get an NDA, change the three years to five years; okay, that can probably be automated away. 
So either they find something new to do in law, or they don’t practice law anymore.
JP McAvoy: It’s gonna change that way. You say the plumber will continue, the trades will continue, and there’s a huge demand for people in the trades. We’re encouraging people to go into the trades because there’s gonna be a real shortfall of people that way. And you say other positions are going to be replaced entirely. I was just saying this to my son; it’s an interesting conversation. We were talking about universal income, and this concept of: if everybody had a certain set base amount, would people actually work? I think it’s the same thing with respect to AI. If a lot of the tasks are being done by AI, what’s going to provide incentive for people to actually do the work in the future?
Alex Fink: I think the most realistic answer is that we come up with new tasks. Once these tasks are taken care of, then generally speaking, humans can come up with new things to do that are also productive and also pay. I am a bit skeptical about the prospects of universal basic income, among other things because I was born in a country that had universal basic employment. Essentially, a factory that required a few thousand people to actually function would employ 300,000 people, and it functioned in the exact same way and produced the exact same thing. So it’s not that different from universal basic income, once you think about it, except people did go to work and back. But what they did there, quite a few of them didn’t quite know. And that country was the Soviet Union, where things weren’t quite right. So I don’t know that that is the direction to go. It’s hard for me to predict the future; that’s not my business. I’m not in the prediction business. But I think, at least in the near future, since you asked me about the next five years, there will be many tasks that are necessary to get AI to progress to where we need it to progress. Even basic things like data annotation. Even basic things like, if OpenAI makes the next version of their model, they need people to provide the human feedback for reinforcement learning, to look at two versions of an answer and say which one looks better. That is thousands of people, even for a single model, and those numbers are going to grow and grow. I don’t know if that is the long-term prospect of what humans want to do. But at least in the short term, I don’t see us having not enough jobs. If anything, I see us having a labor shortage.
JP McAvoy: Interesting. You still see it that way, because a lot of people are talking about, or fearful of, the replacement that will occur, which is why you get into these conversations about, among other things, universal income or universal employment. I like the way you turn that around. Could you talk about what the reality of it actually could look like? You mentioned ChatGPT, or OpenAI: what do you think of their model? Do you think that’s the leading model in that space? Do you think OpenAI wins that race? Who else steps up there, or what are some of the needs being created there?
Alex Fink: For now, there are about six different base models that I can think of. But the three that we see most often in the US are basically ChatGPT, Claude, and Bard. So those are the three leading horses, maybe. And then there are some others that we just don’t see, because they’re in different countries or operate mainly in different languages. I don’t know who’s going to win that race, because at some point, the pioneer doesn’t always win. The last entrant that is big enough typically takes the market. So I’m not sure. It looks like, for now, OpenAI is innovating slightly faster than the others. Google has kind of a weird handicap where, even if they’re really good at this, and even if they really want to win at this, and they probably do, they’re competing with their own business model. If Bard becomes as good as it can be, then Google loses almost $200 billion a year in search revenue. I don’t know that, within the same company, it’s always possible to navigate this kind of interplay between a new technology and the old technology that is still the main cash cow. Most companies in history have failed to navigate that kind of transition. Maybe Google is substantially better than others. They are definitely taking precautions and trying to segregate the two into different departments so that they don’t actually talk to each other, and the guys who are making the money now don’t handicap the guys who are about to make money in the future. So I don’t know. And then there might be somebody who just comes along that we haven’t seen yet. I think there is also a non-trivial possibility that this entire model of how we train large language models now actually plateaus at some point, and we can’t progress further without changing the paradigm. It’s still not clear that you can just keep going forever by increasing model sizes and having more and more parameters. It might just be that at some point, more size is not the solution.
Until now, surprisingly, it has been the solution to every problem. For everything you could think of to fine-tune a model, to make it better at a specific task, somehow the next generation of the generalist model that was just bigger was better than that. I don’t know if that’s going to continue. We might hit the plateau and have to change direction.
JP McAvoy: And maybe a new paradigm. It’s interesting, and think of Google as an example. Certainly, there’s an existential conversation occurring: as you say, nearly all of the traditional revenue is generated by search, and they understand that that’s all going to be completely changed. Now, they can either eat it themselves or be eaten. So I think that’s what we’re gonna see with the evolution of Bard and what Google is doing. When you talk of that evolution and successive models, I hear a lot of people speaking to the quality of the data; that’s what’s needed, the inputs. For OpenAI, we know the sources they’re using, and I think they’re gonna get into licensing issues at some point in terms of the data they seek to use. Does Google not have the best source of information, given all the search history it has?
Alex Fink: They certainly have a good source of information. It’s not clear how much of that is exclusive, because it could be that most of the things Google can access have also been accessed by Bing already. Even though Google indexes more of the web, it’s not clear that that delta is of high quality. It could just be that both of them got the best 4 billion pages, and Google indexed another 56 billion pages. But are they worth anything? It’s not clear. So there is an argument to be made for a specific advantage held by some of the closed services that haven’t really been scraped, or at least those whose terms of service don’t allow it. The Yelps of the world, things like that. They might have an advantage, or whoever licenses that information from them might have an advantage. But you mentioned information quality. I also have this somewhat contrarian view; maybe it will prove true, maybe it will prove false over time, we’ll see. But I think that maybe at some point, again, we plateau with the attempt to just feed more stuff into the model and make it bigger, and we start thinking of ways to filter the junk out of what we’re feeding into it. Because if you want to teach a child physics, you don’t tell them to go to the internet and read every forum that mentions the word physics. You give them three good textbooks, and then they know physics. It’s the selectivity, not the quantity, that works for us humans. Until now, the approach has been that more data makes models better, even if the tail end is bad. But maybe at some point, we actually learn how to rank content better. Now, there is a way to do it, but it’s kind of an ugly hack.
If you want a certain subset of the content you’re feeding into the model during training to have a higher weight because it’s more reliable, you just multiply it, and so the model sees it more often as it gets trained. But it’s not the best way, and it’s also very manual. I don’t know how you scale that to 60 billion pages, going over things and ranking them: which one should be multiplied by two? Which one should be multiplied by three? Which one should be eliminated? I don’t know. I’ve seen examples where some of the hallucinations that come out of ChatGPT are just things that were an April Fools’ joke somebody said once, and nobody ever said the inverse. And therefore, from the model’s perspective, it’s true, because one is larger than zero. If I saw it once and didn’t see the opposite even once, then it must be true. And so it repeats it as if it were true. Somehow, I think the next iteration should learn to filter things like this. How that happens is a work in progress.
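The “ugly hack” Alex describes, multiplying reliable documents so the model samples them more often during training, can be sketched in a few lines. The corpus, weights, and function names below are purely illustrative:

```python
# Sketch of upweighting by duplication: each (document, weight) pair is
# expanded so higher-quality documents appear more often in the training
# stream, and weight 0 filters a document out entirely. The documents
# and weights here are hypothetical examples.
corpus = [
    ("peer-reviewed physics textbook excerpt", 3),  # seen 3x as often
    ("well-sourced news article", 2),
    ("random forum post mentioning physics", 1),
    ("april fools joke stated as fact", 0),         # dropped entirely
]

def upweight(weighted_docs):
    """Expand (document, weight) pairs into a flat training list."""
    training_set = []
    for doc, weight in weighted_docs:
        training_set.extend([doc] * weight)  # weight 0 contributes nothing
    return training_set

training_set = upweight(corpus)
```

This makes the manual-scaling problem concrete: someone has to assign each of those integer weights, which is exactly what does not scale to tens of billions of pages.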
JP McAvoy: A work in progress. And it’s almost circular: we began the conversation with the quality of the content we consume, and now, as you say, it’s the quality of the data we’re building these models upon. On that same theme, Alex: people talk about Tesla as a great AI company without it quite being recognized as such. Do you believe that’s true? We started talking about using all the driving data being generated by Teslas on the road right now. What do you think of the Tesla approach to AI?
Alex Fink: They clearly have an advantage, specifically in computer vision for automotive purposes, because they have the largest data set. Everybody else either doesn’t have a data set to work with, or they have a small data set and they’re trying to augment it with simulated data. Because you have to train your self-driving, or advanced autopilot, or whatever you call it, on something. You need a lot of data so that every particular scenario gets represented enough in your data set, but most cars just haven’t had the cameras in them to gather that data for long enough. So, yes, Tesla has an advantage. Yes, they have a reasonable approach to how they’re trying to build self-driving. Am I sure that they have enough? I’m not. In fact, I’m not even sure about their purist approach of saying, we don’t need radar, we don’t need LiDAR, we’ll just do it with visible-light cameras. I don’t know if that works for all scenarios. It might, or they might one day decide that it doesn’t matter how much data they gather and how perfectly they train everything; it’s just not enough, because of corner cases. I have a good friend who was a Tesla pioneer. He bought one of the very, very early Model (inaudible). He took one of his Teslas out in the rain in California three years ago and turned autopilot on, and it basically went into the middle separator on the 280 and totaled itself. So I am not that big a believer; I’m more of a skeptic. Therefore, I own a Tesla that doesn’t have autopilot. I don’t want that.
JP McAvoy: Don’t want it that way. That’s the big fear. We’re trying to build the best that we can, but there are limits. There’s still learning occurring through all this, and it’s fascinating to watch it evolve. So many of the conversations we’re having these days are around AI, so I appreciate having you on to discuss it this way and think about how it’s going to influence things. Another area we delve into quite frequently is blockchain and crypto. Do you see an intersection there? Are you involved in that in any kind of way?
Alex Fink: Not really. I’m really interested in the space; I was more interested at the peak of the hype cycle, as everybody was. So I think there’s definitely something interesting there, but I don’t really see how the two intersect or interact. If we get to a situation where AI replaces so many jobs that we have universal basic income and the entire financial system is augmented or upended in some way, then maybe it makes sense to change the monetary system, and maybe that’s the intersection. But other than that scenario, blockchain is just a way to store data on many different nodes without centralization. Okay, that’s a good development in its own right. But how does that relate to the ability to generate the things that humans typically do, without the human involved, which is what AI is doing? They’re naturally two completely different directions. You might see scenarios where the two get combined into one product, but I don’t think they’re competing, and I don’t think they’re helping each other. They’re kind of orthogonal.
JP McAvoy: Yeah. And I think where that intersection occurs, there’s potentially a great deal of value. So for those entrepreneurs who are finding creative ways of utilizing the strengths of both, I think that’s where the real opportunity lies. And we perhaps don’t even know what that looks like, or I’m sure we don’t know what that looks like at this point, given the way these conversations are going. An interesting thought, as we ponder what it might look like in five years; it’s a similar question, predicting where it might be or what it might look like. There have to be opportunities there, and I know certain people are working on things, so we’ll see if that continues to evolve. Now, where does the Otherweb go from here? If we were having this same kind of conversation in a couple of years’ time, with you continuing to train the models and improve there as well, where do you see Otherweb?
Alex Fink: Models are improving. The user base is growing; we've crossed 7 million users already across different platforms. The goal is to get to the hundreds of millions. I don't know if we can appeal to the entire market, because quality is a proposition that only appeals to some people. Whole Foods is not the largest retailer in the US. It's large, but it's not the largest. So I think we're the Whole Foods for information, in a sense, and we appeal to a certain subset of the population. I think we can move beyond just the consumption side and try to affect the distribution and the monetization, and maybe even the creation. Maybe create tools for writers to write better and to commit less of the things that we then penalize on the consumption side. Maybe help advertisers figure out what their ads are going to appear on, and change how much they're willing to pay based on the quality of the underlying page. Maybe more things in that direction. But in general, we have a mission. The mission is to improve the quality of information people consume, and anything we can do to further that is something we're willing to pivot toward.
JP McAvoy: It's a wonderful mission and a great goal to have. And even as the user base increases, every single person who benefits from it is incrementally better off. How do people use it? What's the way to get involved or start consuming the product?
Alex Fink: So you can go to otherweb.com, or you can download the app called Otherweb from the Google Play Store or the App Store. The main page is a news aggregator. You configure everything in it, so it gives you the exact feed that you want. It gives you a short bullet point summary of every single article so you don't have to read the entire thing. You have a few bullet points, and you have a nutrition label next to each one so you can see what our models determined about it. Those two things together can tell you whether it's worth consuming from your perspective. But we're trying to add more and more things beyond just news. We already have commentary, research studies, and podcasts, and we keep adding more. With each of them, we try to apply our methods for improving the quality. We just launched yesterday, so this is brand new, fresh out of the oven: a discussions feature that essentially lets you discuss the news where you read it. It looks a lot like Reddit internally, but it has some other unique features that are special to us. And again, we think one of the big benefits is that if a discussion always starts from a high-quality article, it is less likely to devolve into a pure flame war. Whereas if anybody can just do a drive-by and shout something into the ether, then flame wars become a very large percentage of what gets represented on the platform. So discussions are, I think, a big thing for how people actually interact with news. And I think they increase the chances that people will discuss the news where they read it. Because right now, it's kind of bifurcated: you see a discussion thread on Twitter that has a link, you click on the link, go to the New York Times, read it, and go back to Twitter to say what you think. That's an odd consumption pattern. It's also not great for the New York Times, I'm pretty sure. So hopefully we can help improve that, and maybe then partner up with the news publishers themselves, so that our discussions are reflected on their pages, and their pages are reflected in our discussions.
JP McAvoy: Great stuff. And for those listening, Otherweb is certainly worth a look. As you say, if we're spending time thinking about how to spend our time, it's a fitting product; it helps us consume things that are worthy of our time and attention. Alex, thank you so much for this. If people wanted to get in touch with you specifically, what's the best way to reach you?
Alex Fink: alex@otherweb.com. At some point, I'll probably have to stop doing this. But for now, that's my email.
JP McAvoy: There you go. There you go. Thanks so much for this; I really appreciate having you on. I look forward to seeing where things are in the next five years, and hopefully some of the things we discussed come to fruition. Hopefully the quality of things continues to improve; that's where we want to see things go. I appreciate the mission. I like to end these shows with one thing that people can take with them, one thing that's worked for you in the past. I see over your shoulder a note saying "No great thing is created suddenly," which is no surprise. You're the type of person who obviously puts his head down and solves problems, and you've been doing that for years. If you were to offer the people listening one thing, or several things, that have worked for you, what would you say to them as we part ways today?
Alex Fink: I guess at some point I had this mindset shift in how I make decisions. I realized that whether I'm playing a game like poker or chess, or whether I'm giving somebody advice, I usually look at the situation and just try to ask myself: what is the best move? What is the best thing to do in this situation? But when we act in our own lives, we typically don't do that. We follow an urge, or other people's expectations, or social pressure, or we try to avoid some temporary discomfort. We don't actually ask ourselves this question. So that's the one thing that works for me. I have it on my bracelet here: what is the best move? As long as my action is always the answer to that question, at least I will be acting to the best of my own abilities.
JP McAvoy: That's great stuff, and a great move being on the show here today. I do appreciate having you, Alex. Please keep in touch.
Alex Fink: Thanks, JP.