• DarkCloud@lemmy.world

    I think less restrictive AIs that are free, like Venice AI (you can ask it pretty much anything and it won’t stop you), will be around longer than the ones that went with restrictive subscription models, and that those will eventually become niche.

    New technology always propagates further the freer it is to use and experiment with, and ChatGPT and OpenAI are quite restrictive and money-hungry.

    • ugjka@lemmy.world

      It will burst because no one is going to pay a subscription fee for every AI gizmo every app puts on your phone. The way they make any money now is just funneling in more and more VC money in exchange for the promise of AGI (coming soon).

    • Liz@midwest.social

      Same thing happened with the dot-com bubble. The fundamental technology has valid uses, but we’re in the stage where some people are convinced it can be used for literally anything.

  • OsrsNeedsF2P@lemmy.ml

    As someone who follows the startup space (and is thinking of starting their own, non-AI-driven startup), the issue is that all of the easily solvable problems have already been solved. The only thing that shakes up the tree is when new tech comes along and makes some of the old problems easy to solve.

    So take a look at crypto: if you wanted to make a tip bot on Telegram before crypto, that was really hard. You needed to register with something like PayPal, have the recipient register with PayPal, etc., etc. After crypto it was “Hey, this person sent you $5, use this private key if you want to recover it” (btw I made this service and it was used a lot).
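
    Roughly, the post-crypto flow fits in a few lines of Python. This is an illustrative sketch, not the actual service; the on-chain address derivation and funding steps are stubbed out:

    ```python
    import secrets

    def make_tip(amount_usd: float) -> dict:
        # Generate a throwaway 256-bit private key. A real bot would derive
        # an on-chain address from this key and fund it with the tip amount.
        private_key = secrets.token_hex(32)
        message = (f"Hey, this person sent you ${amount_usd:.2f}, "
                   f"use this private key if you want to recover it: {private_key}")
        return {"message": message, "private_key": private_key}

    print(make_tip(5)["message"])
    ```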

    Now look at AI - Imagine making a service that detects CSAM before AI took off. As an aside, I did NOT make this service, but I know a group of people who did. Imagine trying to make this without the AI boom - you’d need millions of images for training data, a PhD in machine learning, and so much more. Now, anyone can make it in their basement.
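
    To give a sense of how low the barrier has dropped, here is a stand-in sketch using a generic pretrained image classifier from the Hugging Face transformers library (a real detection service would need a purpose-trained model, and "photo.jpg" is just a placeholder path):

    ```python
    # Off-the-shelf image classification with a default pretrained model.
    # The point: the plumbing for "AI that looks at images" is now this small.
    from transformers import pipeline

    classifier = pipeline("image-classification")  # downloads a default pretrained model
    for result in classifier("photo.jpg"):         # any local image path or URL
        print(f"{result['label']}: {result['score']:.3f}")
    ```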

    The point is, investors KNOW the bubble is a bubble and that it will pop. It doesn’t matter, though. They’re looking for people who will solve problems that previously cost $1B to solve with only $1M of funding. If even 1% of their companies pay off, they make a profit.
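
    The back-of-envelope math (with made-up but typical-shaped numbers) looks like this:

    ```python
    # Hypothetical portfolio: 100 checks of $1M each, one winner (a 1% hit rate).
    n_startups = 100
    check_size = 1_000_000
    deployed = n_startups * check_size          # $100M total deployed

    problem_value = 1_000_000_000               # the winner cracks a ~$1B problem
    fund_stake = 0.20                           # assume the fund holds a 20% stake
    winner_return = problem_value * fund_stake  # $200M back from one company

    print(f"profit: ${winner_return - deployed:,.0f}")  # $100M despite 99 failures
    ```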

    • GreenKnight23@lemmy.world

      bubble after bubble after bubble after…

      Problem is, the amount of soap (money) that goes around to make the bubbles keeps shrinking, because the bubbles are siphoning it away from the consumers.

      I wonder what happens when there’s no more soap left to go around?

    • MajorHavoc@programming.dev

      If even 1% of their companies pay off, they make a profit.

      I suspect they make a profit even when 0% pan out. They just need to find someone gullible enough to buy in at the peak, and there’s a new sucker born every minute.

    • tal@lemmy.today

      “probably 1% of the companies will stand out and become huge and will create a lot of value, or will create tremendous value for the people, for society. And I think we are just going through this kind of process.”

      Baidu is huge. Sounds like good news for Baidu!

  • EmperorHenry@discuss.tchncs.de

    Yeah, AI is really more of a surveillance tool than anything else.

    When AI “creates” something, it’s just pulling up things related to the words you typed in and assembling an amalgamation of them out of what it already has.

    The real purpose is for corporations and governments to look through people’s devices and online storage at super speed.

    This is why you all need to be using end-to-end encrypted storage for everything, and VPNs with perfect forward secrecy.

    Do your own research into the history of each provider of those things before you buy.

    • ClamDrinker@lemmy.world

      There is so much wrong with this…

      AI is a range of technologies. So yes, you can make surveillance with it, just like you can with a computer program like a virus. But obviously not all computer programs are viruses nor exist for surveillance. What a weird generalization. AI is used extensively in medical research, so your life might literally be saved by it one day.

      You’re most likely talking about “Chat Control”, a controversial EU proposal to scan for dangerous and illegal content like CSAM, either on people’s devices or from the providers’ end. This is obviously a dystopian way to achieve that, as it sacrifices literally everyone’s privacy to do it, and there is plenty to be said about that without randomly dragging AI into it. You can do this scanning without AI as well, and that doesn’t change anything about how dystopian it would be.

      You should be using end-to-end encryption regardless, and a VPN is a good investment for making your traffic harder to discern, but if Chat Control is passed to operate at the device level, you are kind of boned without circumventing that software, which would potentially be outlawed or made very difficult. It’s clear on its own that Chat Control is a bad thing; you don’t need some kind of conspiracy theory about ‘the true purpose of AI’ to see that.

  • FlashMobOfOne@lemmy.world

    If you’re invested in these stocks, make sure you have your stop loss orders in place, 100%.

    I imagine the bubble bursting will be quick and deadly.

  • don@lemm.ee

    They couldn’t keep their heads on fucking straight during the .com bubble, and here they are doing it all over again.

  • peopleproblems@lemmy.world

    10 to 30? Yeah, I think it might be a lot longer than that.

    Somehow everyone keeps glossing over the fact that you have to have enormous amounts of highly curated data to feed the trainer in order to develop a model.

    Curating data for general purposes is incredibly difficult. The big medical research universities have been working on it for at least a decade, and the tools they have developed, while cool, are only useful as tools to a doctor who has learned how to use them. They can speed diagnostics up and improve patient outcomes, but they cannot replace anything in the medical setting.

    The AI we have is like fancy signal processing, at best.

    • RBG@discuss.tchncs.de

      Not an expert, so I might be wrong, but as far as I understand it, those specialised tools you describe are not even AI; it’s all machine learning. Maybe to the end user it doesn’t matter, but people have this idea of an intelligent machine when it’s more like brute-force information feeding into a model system.

      • RecluseRamble@lemmy.dbzer0.com

        Don’t say AI when you mean AGI.

        By definition AI (artificial intelligence) is any algorithm by which a computer system automatically adapts to and learns from its input. That definition also covers conventional algorithms that aren’t even based on neural nets. Machine learning is a subset of that.

        AGI (artificial general intelligence) is the thing you see in movies, the thing people project onto their LLM responses, and what’s driving this bubble. It is the end goal: a system able to perform everything a human can, at at least human level. Pretty much all the actual experts agree we’re a long way from such a system.

        • BallsandBayonets@lemmings.world

          It may be too late on this front, but don’t say AI when there isn’t any I to it.

          Of course, it could be successfully argued that humans (or at least a large number of them) are also missing the I, and are just spitting out the words that are expected of them based on the words that have been ingrained in them.

          • frezik@midwest.social

            AI as a field of computer science is mostly about pushing computers to do things they weren’t good at before. Recognizing colored blocks in an image was AI until someone figured out a good way to do it. Playing chess at grandmaster levels was AI until someone figured out how to do it.

            Along the way, it created a lot of really important tools. Things like optimizing compilers, virtual memory, and runtime environments. The way computers work today was built off of a lot of things out of the old MIT CSAIL labs. Saying “there’s no I to this AI” is an insult to their work.

            • ContrarianTrail@lemm.ee

              Recognizing colored blocks in an image was AI until someone figured out a good way to do it. Playing chess at grandmaster levels was AI until someone figured out how to do it.

              You make it sound like these systems stopped being AI the moment they actually succeeded at what they were designed to do. When you play chess against a computer it’s AI you’re playing against.

              • frezik@midwest.social

                That’s exactly what I’m getting at. AI is about pushing the boundary. Once the boundary is crossed, it’s not AI anymore.

                Those chess engines don’t play like human players. If you looked at how they actually determine their moves, you might conclude they’re not intelligent at all, by the same metrics by which you’re dismissing ChatGPT. But at this point, they are almost impossible for humans to beat.
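
                To make that concrete, here is a toy alpha-beta minimax in Python, the kind of brute-force search-and-evaluate loop classic engines are built on (a hand-made game tree stands in for actual chess rules):

                ```python
                # Alpha-beta minimax over a tiny hand-made game tree: score leaves,
                # back values up the tree, and prune branches the opponent would
                # never allow. Nothing human-like about it, yet it wins.
                def alphabeta(node, depth, alpha, beta, maximizing):
                    if depth == 0 or "children" not in node:
                        return node["value"]  # static evaluation at a leaf
                    if maximizing:
                        best = float("-inf")
                        for child in node["children"]:
                            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
                            alpha = max(alpha, best)
                            if beta <= alpha:
                                break  # prune: the minimizer would avoid this line
                        return best
                    best = float("inf")
                    for child in node["children"]:
                        best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
                        beta = min(beta, best)
                        if beta <= alpha:
                            break
                    return best

                tree = {"children": [
                    {"children": [{"value": 3}, {"value": 5}]},
                    {"children": [{"value": 2}, {"value": 9}]},
                ]}
                print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # -> 3
                ```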

                • ContrarianTrail@lemm.ee

                  I’m not the person you originally replied to. At no point have I dismissed ChatGPT.

                  I disagree with your logic about the definition of AI. Intelligence is the ability to acquire, understand, and use knowledge. A chess-playing AI can see the board, understand the ramifications of each move, and respond to how the pieces are moved. That makes it intelligent - narrowly so, but intelligent nonetheless. And since it’s artificial too, it fits the definition of AI.

          • ContrarianTrail@lemm.ee

            Intelligence: The ability to acquire, understand, and use knowledge.

            A self-driving car is able to observe its surroundings, identify objects, and change its behaviour accordingly. Thus a self-driving car is intelligent. What’s driving such a car? AI.

            You’re free to disagree with how other people define words, but then don’t take part in their discussions expecting everyone to agree with your definition.

          • celliern@lemmy.world

            This is not up to you or me: AI is an area of expertise, a scientific field with a precise definition. Large, but well defined.

    • ContrarianTrail@lemm.ee

      LLMs are not the only type of AI out there. ChatGPT appeared seemingly out of nowhere. Who’s to say the next AI system won’t do that as well?

      • peopleproblems@lemmy.world

        ChatGPT did not appear out of nowhere.

        ChatGPT is an LLM, a generative pre-trained model built on a neural network.

        Aka: it’s a chat bot that creates its responses based on an insane amount of text data. LLMs trace back to the 90s, and I learned about them in college in the late 2000s-2010s. Natural Language Processing was a big contributor, and Google introduced some powerful neural network tech in 2014-2017.

        The reason they “appeared out of nowhere” to the common man is merely marketing.
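
        As a sketch of how mature the plumbing already was before the hype, here is GPT-2, a 2019 model, doing the same next-token generation trick via the Hugging Face transformers library:

        ```python
        # GPT-2 (2019) generating text token by token: the same basic trick
        # as ChatGPT, years earlier and with far less marketing.
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")
        out = generator("Large language models trace back to", max_new_tokens=30)
        print(out[0]["generated_text"])
        ```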

              • peopleproblems@lemmy.world

                LLMs are not the only type of AI out there. ChatGPT appeared seemingly out of nowhere. Who’s to say the next AI system won’t do that as well?

                I’m not sure what I’m misquoting. A large language model is not AI; a large language model is a non-human-readable function used by a generative AI algorithm.

                Simply put, ChatGPT did not appear out of nowhere.

                • ContrarianTrail@lemm.ee

                  ChatGPT did not appear out of nowhere

                  I agree.

                  The key word there is seemingly. The technology itself had existed for a long time, but it wasn’t until the massive leap OpenAI made with it that it actually became popular. Before ChatGPT, 99% of people had never heard of LLMs, and now everyone has. That’s what I mean when I say it appeared seemingly out of nowhere - it took the masses by surprise. There’s no reason to assume another company working on a different approach to AI won’t make a similar massive breakthrough, giving us AI far more powerful than LLMs and taking everyone by surprise, despite the base technology having existed for a long time.

                  A large language model is not AI

                  It is AI though - a subset of generative AI to be specific, but it still falls under the AI category.

      • Vritrahan@lemmy.zip

        Anything can happen. We can discover time travel tomorrow. The economy cannot run on wishful thinking.

        • lennivelkant@discuss.tchncs.de

          It can! For a while. Isn’t that the nature of speculation and speculative bubbles? Sure, they may pop some day, because we don’t know for sure what’s a bubble and what is a promising market disruption. But a bunch of people make a bunch of money until then, and that’s all that matters.

          • Vritrahan@lemmy.zip

            The uncertainty of it is exactly why it shouldn’t be sucking up as much capital and resources as it is.

              • Vritrahan@lemmy.zip

                I agree, and the problem is finance capitalism itself. But then it becomes an ideological argument.

                • knightly the Sneptaur@pawb.social

                  The argument could be made economically rather than ideologically.

                  Capitalism has a failure mode where too much capital gets concentrated into too few hands, depressing the flow of money moving through the economy.

                  But Capitalists start crying “Socialism!” as soon as you start talking about anti-trust.

    • FatCrab@lemmy.one

      AI in health and medtech has been around and in the field for ages. However, two persistent challenges make rollout slow, and they’re not going anywhere, given the stakes at hand.

      The first is just straight regulatory. Regulators don’t have a very good or very consistent working framework to apply to these technologies, but that’s in part due to how vast the field is in terms of application. The second is somewhat related to the first but is also very market-driven: the explainability of outputs. Regulators generally want it, of course, but customers (i.e., doctors) also don’t just want predictions/detections; they want and need to understand why a model “thinks” what it does. Doing that in a way that does not itself require significant training in the data and computer science underlying the particular model and architecture is often pretty damned hard.
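
      For a flavor of what “explainability” can mean in practice, here is one simple, generic technique, permutation importance, sketched on synthetic data (real medical models, and regulators, demand far more than this):

      ```python
      # Permutation importance: "how much worse does the model get if we
      # scramble this feature?" A crude but model-agnostic explanation.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
      result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

      for i, score in enumerate(result.importances_mean):
          print(f"feature {i}: importance {score:.3f}")
      ```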

      I think it’s an enormous oversimplification to say modern AI is just “fancy signal processing” unless all inference, including that done by humans, is also just signal processing. Modern AI applies rules it is given, explicitly or by virtue of complex pattern identification, to inputs to produce outputs according to those “given” rules. Now, what no current AI can really do is synthesize new rules uncoupled from the act of pattern matching. Effectively, a priori reasoning is still out of scope for the most part, but the reality is that that simply is not necessary for an enormous portion of the value proposition of “AI” to be realized.