Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale.

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way — on their own — matched their test scores.

  • flerp@lemm.ee · 4 months ago

    Like any tool, it depends how you use it. I have been learning a lot of math recently and have been chatting with AI to increase my understanding of the concepts. There are times when the textbook shows steps and I don’t understand why they’re happening, so I’ve questioned the AI about it. Sometimes it takes a few tries before you figure out the right question to ask, but that process of thinking helps you along the way anyway by crystallizing in your brain exactly what it is that you don’t understand.

    I have found it to be a very helpful tool in my educational path. However, I am learning things because I want to understand them, not because I have to pass a test, and that determination to understand makes a big difference. Just getting hints to help you solve the problem might not really help in the long run, but if you’re actually curious about what you’re learning and focused on getting a deeper understanding of why and how something works rather than just getting the right answer, it can be a very useful tool.

    • Rekorse@sh.itjust.works · 4 months ago

      Why are you so confident that the things you are learning from AI are correct? Are you just using it to gather other sources to review by hand, or are you trying to have conversations with the AI?

      We’ve all seen AI get the correct answer while the “show your work” part is nonsense, or vice versa. How do you verify what AI outputs to you?

      • GaMEChld@lemmy.world · 4 months ago

        You check its work. I used it to calculate efficiency in a factory game, then went through and corrected the inconsistencies I spotted. Always check its work.
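
        To make “always check its work” concrete, here is a minimal sketch of the kind of recomputation that means in practice. The recipe numbers and the items_per_minute helper below are hypothetical, invented purely for illustration rather than taken from any particular game; the point is just to redo the arithmetic independently and compare it with the model’s figure.

        ```python
        # Hypothetical spot-check of an LLM's claimed throughput for a production line.
        # All recipe values below are invented for illustration; substitute your own.

        def items_per_minute(machines: int, craft_time_s: float,
                             output_per_craft: int, crafting_speed: float = 1.0) -> float:
            """Throughput of a bank of identical machines, in items per minute."""
            crafts_per_minute = (60.0 / craft_time_s) * crafting_speed
            return machines * crafts_per_minute * output_per_craft

        # Say the model claimed that 5 assemblers (0.5 s per craft, 1 item per craft,
        # speed modifier 0.75) produce about 500 items per minute. Recompute it:
        claimed = 500.0
        recomputed = items_per_minute(machines=5, craft_time_s=0.5,
                                      output_per_craft=1, crafting_speed=0.75)

        print(f"claimed: {claimed:.0f}/min, recomputed: {recomputed:.0f}/min")
        # The numbers disagree here (500 vs. 450), so the model's figure is off;
        # trust the arithmetic you can verify yourself and ask the model to show its steps.
        ```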

        • flerp@lemm.ee · 4 months ago

          Exactly. It’s a helpful tool but it needs to be used responsibly. Writing it off completely is as bad a take as blindly accepting everything it spits out.

      • pflanzenregal@lemmy.world · 4 months ago

        I use it for explaining stuff when studying for uni, and I do it like this: if I don’t understand a definition, for example, I ask an LLM to explain it, then read the original definition again and see if it makes sense.

        This is an informal approach, but if the definition is sufficiently complex, false answers are unlikely to lead to an understanding. Not impossible ofc, so always be wary.

        For context: I’m studying computer science, so lots of math and theoretical computer science.

      • flerp@lemm.ee · 4 months ago

        I’m not at all confident in the answers directly. I’ve gotten plenty of wrong answers from AI and I’ve gotten plenty of correct answers. If anything, it’s just more practice for critical thinking skills: separating what is true from what isn’t.

        When it comes to math, though, it’s pretty straightforward: I’m just looking for context on some steps in the problems, maybe reminders of things I learned years ago and have forgotten, that sort of thing. As I said, I’m interested in actually understanding the stuff that I’m learning because I’m using it for the things I’m working on, so I’m mainly reading through textbooks and using AI, along with other online sources, to round out my understanding of the concepts. If I’m getting the right answers and the things I’m doing are working, it’s a good indicator I’m on the right path.

        It’s not like I’m doing cutting-edge physics or medical research where mistakes could cost lives.

        • Rekorse@sh.itjust.works · 4 months ago

          It’s sort of similar to saying that poppy production overall is pretty negative, but that if a smart, critical person uses opiates sparingly and cautiously, they could be of great benefit to that person.

          That’s all well and good, but AI is not being developed to help critical thinkers do research slightly more easily; it’s being created to reduce the amount of money companies spend on humans.

          Until regulations are in place to guide the development of the technology in useful ways, I don’t know that any of it should be permitted. What’s the rush, anyway?

          • flerp@lemm.ee · 4 months ago

            Well, I’m definitely not pushing for more AI, and I try to stay nuanced on the topic. Like I mentioned in my first comment, I have found it to be a very helpful tool, but used in other ways it could do more harm than good. I’m not involved in making or pushing AI, but as long as it is an available tool I’m going to make use of it in the most responsible way I can and talk about how I use it. I can’t control what other people do, but maybe I can help some people who are only using it to get answer hints, like in the article, to find more useful ways of using it.

            When it comes to regulation, yeah, I’m all for that. It’s a sad reality that regulation always lags behind and generally doesn’t get implemented until there’s some sort of problem that scares the people in power, who are mostly too old to understand what’s happening anyway.

            And as to what the rush is, I would say a combination of curiosity and good intentions mixed with the worst of capitalism: the carrot of financial gain for success and the stick of financial ruin for failure. I don’t have a clue what percentage of the pie each part makes up. I’m not saying it’s a good situation, but it’s the way things go, and I don’t think anyone alive could stop it. Once something is out of the bag, there ain’t any putting it back.

            Basically, I’m with you that it will be used for things that make life worse for people, and that sucks. It would be great if that were not the case, but it doesn’t change the fact that I can’t do anything about it. Meanwhile it can still be a useful tool, so I’m going to use it as best I can regardless of how others use it, because there’s really nothing I can do except keep pushing forward as best I can, just like anyone else.

            • Rekorse@sh.itjust.works · 4 months ago

              It might just be a difference in perspective. I agree with your assessment of how things are, but not of how they will be in the future. There are countries that are more responsible in their research, so I know it’s possible. It’s all politics, and I don’t believe in giving up on social change just yet.

      • NιƙƙιDιɱҽʂ@lemmy.world · 4 months ago

        I personally use its answers as a jumping-off point for my own research, or I ask it directly for sources and check those out. I frequently use LLMs for learning about topics, but I definitely don’t take anything they say at face value.

        For a personal example, I use ChatGPT as my personal Japanese tutor. I use it to discuss and break down the nuances of various words or sayings, the names of certain conjugation forms, etc., and it is absolutely not 100% correct, but I can now take the names of things it gives me in native Japanese that I never would have known and look them up using other resources. Either it’s correct and I find confirming information, or it’s wrong and I can research further independently or ask it follow-up questions. It’s certainly not as good as a human native speaker, but for $20 a month, and as someone who enjoys doing their own research, I fucking love it.

        • obbeel@lemmy.eco.br · 4 months ago

          Hey, that’s a cool thing to do! I’ll try it. Learning a new language through LLMs sounds cool.

          • NιƙƙιDιɱҽʂ@lemmy.world · 4 months ago

            It is! Just be aware that it won’t always be right. It’s good to verify things with additional sources (as with anything, really).

      • Buttons@programming.dev · 4 months ago

        Like the OP, I was also studying math from a textbook and using GPT-4 to help clear things up. GPT-4 caught an error in the textbook.

        The LLM doesn’t have a theory of mind; it won’t start over and try to explain a concept from a completely new angle, and it mostly just repeats the same stuff over and over. Still, once I have figured something out, I can ask the LLM if my ideas are correct, and it sometimes makes small corrections.

        Overall, most of my learning came from the textbook, and talking with the LLM about the concepts I had learned helped cement them in my brain. I didn’t learn a whole lot from the LLM directly, but it was good enough to confirm what I learned from the textbook and sometimes correct mistakes.

      • KairuByte@lemmy.dbzer0.com · 4 months ago

        I mean, why are you confident the work in textbooks is correct? Both have been proven unreliable, though I will admit LLMs are much more so.

        The way you verify in this instance is to actually go through the work yourself after you’ve been shown sources. They are explicitly not saying they take 1+1=3 as law; they’re asking how that result was reached and working through the explanation to see if it makes sense and to learn more.

        Math is likely the best subject for this, too. You have undeniable truths in math: it’s true, or it’s false. There are no (meaningful) opinions on how addition works other than the correct one.

        • Rekorse@sh.itjust.works · 4 months ago

          The problem with this style of verification is that there is no authoritative source. Neither the AI nor you is capable of verifying accuracy, and the AI carries no expectation of being accurate or of being revised.

          I don’t see how this is any better than running Google searches for relevant discussions on Reddit or other message boards and basing your knowledge on those.

          If AI were enabling something new, that might be worth it, but letting someone find slightly less (or more) shitty message board posts 10% more efficiently isn’t worth what’s happening. There are countries that are capable of regulating a field as it fills out, so why can’t America? We banned TikTok in under a month, didn’t we?

    • Gsus4@mander.xyz · 4 months ago

      Sometimes it leads me wildly astray when I do that, like a really bad tutor… but it is good if you want a refresher and can spot the bullshit along the way. It is also good for surfacing things you didn’t know before and can fact-check afterwards.

      …but maybe other review papers and textbooks are still better…