
High vs Low Church of AI

Infinite Chaos

"On one side: trillion-dollar tech giants burning billions in pursuit of Artificial General Intelligence (AGI). Their strategy is
powered by massive data centers, moonshot ambitions, and a near-religious belief in scale. This is the High Church of AI.

On the other: a scrappy, fast-moving ecosystem of startups, indie hackers, and open-source developers, quietly using

the fallout from that race to build practical, profitable, often boring businesses. This is the Low Church of AI."

Paid Article

I can't gift articles on Medium but I hope there's enough in the elements quoted for anyone interested.

Certainly, I believe the tech giants are going all in on "AI" to make it work, but I also see, far below them, smaller developers and idealists making smaller-scale apps and tools that utilise some elements of AI. In short, the article writer believes the "low church" of AI is winning: the billions and the effort being poured in are unsustainable, whereas the bits that don't work in the way the major giants want or hope can be utilised by others with far less grandiose ambitions.

What do you think?
 
The little successes will turn into the next giants. Which will inspire the next wave of little innovations. etc.
 
I don't agree, or perhaps to put it differently, I think the article -- or at least the part of it that you shared -- sets up a false dichotomy, and I would set up a different one.

Instead, I would say that the large tech companies are pursuing open-ended reasoning models intended to outperform what we'd classify as traditional examples of practical human intelligence, e.g. scientists, engineers, mathematicians. The smaller companies are pursuing more targeted solutions that rely less on "intelligence" and more on automation and digestion. So a Gemini or Meta model can solve a complex mathematical, medical or logical problem or puzzle, while the smaller models focus on specific tasks like creating images or summarizing text.
 
The little successes will turn into the next giants.

Some of those 9,000 Microsoft employees who got laid off and replaced by AI will need to start creating the little successes. I just wonder where this is all going.


Out of curiosity, I can see how Microsoft is tying up AI in the corporate world (especially in Europe, with GDPR rules helping them), but what are Meta and Gemini getting out of AI? I don't think they are the only ones working on machine intelligence, and I suspect the smaller, more agile sector might pull some surprises - unless the giants simply buy up any promising companies, in the same way Meta bought WhatsApp and Instagram to swallow the competition.
 
Google is doing the same thing as Microsoft and I would argue more successfully. After all, experts in the field such as myself are paying for Gemini subscriptions while mooching on Microsoft’s stuff for free.

Meta? I have no idea.
 
Google is doing the same thing as Microsoft and I would argue more successfully. After all, experts in the field such as myself are paying for Gemini subscriptions while mooching on Microsoft’s stuff for free.

Meta? I have no idea.
Zuck is probably dreaming of the AI Metaverse, complete with legs.
 
Google is doing the same thing as Microsoft and I would argue more successfully.

There aren't many big companies or governmental organisations in Europe using Google or Gemini. Europe's data border rules prevent us from using Gemini or any other AI for any serious work within our organisations. I spent quite a bit of time researching all this for our AI policy, and only Microsoft (at the moment) guarantees that data borders and data disposal are observed.


Now, you may know something we don't, and Microsoft might be secretly training Copilot on the data and prompts we enter - I wouldn't put it past any of the big companies - but we have to work with AI companies that observe these rules. The free version of Copilot is disabled on our business machines and we are restricted to M365 Enterprise Copilot.

We've had to block Grammarly and other AI tools because the data is processed in the US and elsewhere outside of European rules.

After all, experts in the field such as myself are paying for Gemini subscriptions while mooching on Microsoft’s stuff for free.

Such is the way outside Europe.
 
It’d be great if these tech giants didn’t try to shoehorn AI into literally everything lol. I rudely told Grok to go the **** away in my dad’s car.
 
It’d be great if these tech giants didn’t try to shoehorn AI into literally everything lol. I rudely told Grok to go the **** away in my dad’s car.

I see elements of the .com bubble here: AI is the current buzzword, and lots of people feel they have to get it or have it in a product.

It will all settle down, and the genuinely useful applications will come about.
 
The FDA has a drug approval AI and it's making up fake studies.

We're pretty ****ed.
 
I thought the church of AI was those weird rationalist cults that obsess over Roko's basilisk and try to appease their future AI overlord.
 