• Ghast@lemmy.ml · 2 years ago

    I hope so, and it’s sort of the aim.

    Hume and Locke’s writings often fall under ‘Philosophy of Mind’ - a subject which, at the time, was nothing like a science. Nowadays, much of the mind is squarely under the purview of neuroscience or psychology.

    Utilitarianism was always the branch hoping to involve science and eventually become law. Bentham - the originator - stated this should be the methodology for writing laws: instead of vague moral debates, we should answer the single question ‘what would bring the most utility to people?’.

      • Ghast@lemmy.ml · 2 years ago

        A master’s degree in Philosophy, specializing in ethical theory.

        Take, for example, the statement “he didn’t deserve that”. How do we find out whether that’s true?

        Or we can look at the lack of epistemological grounding. If I bet you €5 that some building is taller than another, we can go online, find out who’s right, and the money’s paid out.

        Now imagine I bet you that fur clothing is always morally wrong. How could the money get paid out? What evidence would yield a publicly verifiable conclusion?

        • Cold Hotman@nrsk.no · 2 years ago

          Moralism and ethics are difficult, but isn’t even the question “what would bring the most utility to people?”, in the spirit of Bentham, a subjective one, depending on how one feels about something? What gives you happiness or benefit could cause me immense grief and put me at a disadvantage, no?

          • Ghast@lemmy.ml · 2 years ago

            Right - the idea’s not to conclude with ‘tomatoes bring utility - let’s make tomatoes’. The idea’s to maximize total utility, given a population with different values.

            • Cold Hotman@nrsk.no · 2 years ago

              I’m very interested in this topic. How would you define the maximal total utility for a group with different values? And is there a limit to optimizing for a group before it starts coming at a cost to subgroups?

              I understand that it’s easy to revert to an argument about a homogeneous group, but unless everyone is identical, even the slightest difference could lead to large splits. From a global perspective, the differences between e.g. Catholics and Protestants are comparatively small, yet some experience a large divide.

              • Ghast@lemmy.ml · 2 years ago

                I’m very interested in this topic. How would you define the maximal total utility for a group with different values?

                I’ll try to condense what I’ve read with some bullet points:

                • Utilitarian theory has a lot to say about why this is possible in principle, but it’s not always possible in practice. It’s similar to physics, where everything has mass, but actual measurement is difficult (except Utilitarian theory isn’t nearly as developed as any branch of Physics).
                • Actual methodologies typically use Economics tools, such as Game Theory. There’s another hurdle to implementation - we can calculate a ‘fair’ wage for everyone in a corporation, depending on what they contribute, but the actual formula becomes intractable once you get beyond about 10 people (or at least 10 roles) - see the sketch after this list.
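                A minimal sketch (mine, not from the thread) of the sort of Game Theory tool this alludes to: the Shapley value, the standard formula for a ‘fair’ split of a group’s output. The three workers and the toy output function are hypothetical; the point is that the formula averages each person’s marginal contribution over every order in which the group could assemble, and the number of orders grows as n!, which is why it stops being computable somewhere past 10 people.

                ```python
                from itertools import permutations

                def shapley_values(players, value):
                    """Average each player's marginal contribution over all join orders."""
                    totals = {p: 0.0 for p in players}
                    orders = list(permutations(players))
                    for order in orders:
                        coalition = set()
                        for p in order:
                            before = value(frozenset(coalition))
                            coalition.add(p)
                            totals[p] += value(frozenset(coalition)) - before
                    return {p: t / len(orders) for p, t in totals.items()}

                # Hypothetical 3-person firm: pairs produce more than individuals alone.
                def output(coalition):
                    table = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 10,
                             frozenset("C"): 0, frozenset("AB"): 30, frozenset("AC"): 20,
                             frozenset("BC"): 20, frozenset("ABC"): 40}
                    return table[coalition]

                print(shapley_values("ABC", output))
                # ~ {'A': 16.67, 'B': 16.67, 'C': 6.67} - C adds least, so earns least.
                # With 10 players there are already 10! = 3,628,800 join orders.
                ```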

                even the slightest difference could lead to large splits.

                Yes - every difference in someone’s individual utility mappings can affect a given decision, but it’s not all that crazy once you look at real-world examples.

                And is there a limit to optimizing for a group before it starts coming at a cost to subgroups?

                Yes - and utilitarianism won’t offer any suggestions on where to draw that line.

                5 people want to go to the cinema. 2 of them love Marvel, 1 hates Marvel. The currently playing films are ...

                Mathematically, this example threatens to become insanely challenging, but we make these decisions every day, so clearly we’re making some attempt to maximize utility, even if we’re not 100% successful.
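                To make the arithmetic concrete, here’s a minimal sketch with made-up films and utility scores (the film list above trails off, so every name and number is hypothetical): score each film by summing the group’s utilities and pick the maximum.

                ```python
                # person -> film -> utility; all names and numbers are hypothetical
                utilities = {
                    "Ann": {"Marvel": 9, "Drama": 4, "Comedy": 6},  # loves Marvel
                    "Ben": {"Marvel": 8, "Drama": 5, "Comedy": 6},  # loves Marvel
                    "Cat": {"Marvel": 1, "Drama": 7, "Comedy": 6},  # hates Marvel
                    "Dan": {"Marvel": 5, "Drama": 6, "Comedy": 6},
                    "Eve": {"Marvel": 5, "Drama": 6, "Comedy": 6},
                }

                totals = {film: sum(person[film] for person in utilities.values())
                          for film in ["Marvel", "Drama", "Comedy"]}
                print(totals)                       # {'Marvel': 28, 'Drama': 28, 'Comedy': 30}
                print(max(totals, key=totals.get))  # Comedy: nobody's top pick, highest total
                ```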

                From a global perspective, the differences between e.g. Catholics and Protestants are comparatively small, yet some experience a large divide.

                This is an easy one - don’t take global perspectives when making decisions, unless it’s a question with a super-homogeneous answer like ‘should people get stabbed by rabid monkeys?’.

        • altair222@beehaw.org · 2 years ago

          I think no one is being vague except you. Before even saying “he didn’t deserve that”, anyone from a philosophical background would ask you a thousand questions, starting with “he who?”, “what happened to him?”, “he did what?”

          • Ghast@lemmy.ml · 2 years ago

            So you’re saying speech about Ethics isn’t vague, because someone who’s studied philosophy would ask a thousand questions about the situation. Is that what you’re saying?