• 0 Posts
  • 4 Comments
Joined 10 months ago
Cake day: June 30th, 2025

  • NotANumber@lemmy.dbzer0.com to Science Memes@mander.xyz · What would you do?
    12 days ago

    Did you actually read what I wrote or the context behind it? I don’t think you did.

    I am saying that not everyone wants to learn these things just for the sake of it. Some people want to learn the parts of maths that are more practical and want to be given practical examples. I don’t see a problem with accommodating those students, and I don’t see why we should look down on people who think that way, as the original commenter was doing.

    What I am not saying is that no one should learn any maths at all. I don’t know how you got that from my comment. It’s like you are deliberately trying to misinterpret what I am saying.


  • NotANumber@lemmy.dbzer0.com to Science Memes@mander.xyz · Squiggly Boie
    edited · 6 months ago

    To be more specific, this is an MLP (Multi-Layer Perceptron). Neural network is a catch-all term that includes other things such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), diffusion models, and of course Transformers.
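A minimal sketch of an MLP forward pass like the one in the picture (layer sizes, weights, and the ReLU activation here are illustrative, not from any specific model):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def mlp_forward(x, layers):
    """Forward pass: each layer is a (weights, bias) pair,
    with an activation between layers but not on the output."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = relu(x)
    return x

# A 2-layer MLP: 4 inputs -> 8 hidden units -> 3 outputs
layers = [
    (rng.standard_normal((4, 8)), np.zeros(8)),
    (rng.standard_normal((8, 3)), np.zeros(3)),
]

x = rng.standard_normal((1, 4))  # one input sample
y = mlp_forward(x, layers)
print(y.shape)  # (1, 3)
```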

    What you are arguing with online is some variant of a Generative Pre-trained Transformer, which does have MLP or MoE (Mixture of Experts) layers, but that’s only one part of what it is. It also has multi-headed attention mechanisms and embedding + unembedding layers.
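For contrast with the MLP, here is a sketch of a single attention head (the building block of the multi-headed attention mentioned above). The shapes and names are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
seq_len, d_k, d_v = 5, 8, 8
Q = rng.standard_normal((seq_len, d_k))  # queries
K = rng.standard_normal((seq_len, d_k))  # keys
V = rng.standard_normal((seq_len, d_v))  # values

out = attention(Q, K, V)
print(out.shape)  # (5, 8)
```

Unlike an MLP layer, which applies the same fixed weights to each input independently, attention mixes information across positions in the sequence.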

    I know all this and wouldn’t call myself a machine learning expert. I just use the things. Though I did once train a simple MLP like the one in the picture. I think it’s quite bad to call yourself a machine learning expert without knowing all of this and more.