• 1 Post
  • 51 Comments
Joined 6 months ago
Cake day: July 10th, 2025



  • Wanting to learn.

    I was a part of a very orthodox religious school whose main appeal is its dual-curriculum setup. They start with Judaics at 7 am, run secular studies from 12:40 pm to 5 pm, and then do more Judaics from 7 to 9 pm. I never had a proper education, and when I went to the secular-studies principal to be put back into arithmetic, she got quite pissed at me and mocked me for even wanting this. I went on a complete and utter fucking tangent, refusing to go to school until they fucking put me in arithmetic.

    After I somehow graduated in 2022 (with absolutely fucking nothing – no diploma or anything), I went to community college and worked my way from arithmetic to multivariable calculus within a two-year span. Shit, I even graduated with a degree in mathematics.

    I recently went back and started substituting there during my winter break from university, and it has gotten so fucking shitty that while I was subbing, the entire day was spent drawing… fucking drawing.

    These aren’t kindergartners; these are 6th, 7th, and 8th graders. They should be learning algebra at this point, and they’re just familiarizing themselves with exponents. The 8th grade doesn’t even have a teacher to teach fucking math.

    With all that said, I sent an email to the principal posing a dilemma: “either you bring the secular program up to standard, or I will cut it.” The principal never responded, and I may have no choice but to send a cease-and-desist letter as a more formal course of action and an additional warning if they don’t cooperate.










  • As far as I’m concerned, the generative AI that we see in chatbots has no goal associated with it: it just exists for no purpose at all. In contrast, Google Translate and other translation apps (which, BTW, still use machine-learning algorithms) have a far more practical use, as a resource for translating other languages in real time. I don’t care what companies call it (a tool or not); at the moment it’s a big fucking turd that AI companies are trying to force-feed down our fucking throats.

    You also see this tech slop happening historically in the evolution of search engines. Before we had recommendation algorithms in most modern search engines, a search engine was basically a database where the user had to thoughtfully word their queries to get good results. Then came the recommendation algorithm, and I can only imagine that no one, literally no one, cared about it, since we could already do the things this algorithm offered to solve. Still, it was pushed, and sooner or later integrated into most popular search engines. Now you see the same thing happening with generative AI…

    The purpose of generative AI, much like the recommendation algorithm, is solving nothing, hence the analogy “it’s just a big fucking turd” that I’m trying to get across here: we could already do the things it offered to solve. If you can see the pattern, it’s just this downward-spiraling effect. It appeals to anti-intellectuals (which is most of the US at this point), and Google and other major companies are making record profit by selling user data to brokers: it’s a win for both parties.


  • Skip lists are interesting data structures. The underlying mechanism is a layered, probabilistic linked list with some associated height ‘h’ that enables skipping over nodes. Compared to a traditional linked list, which searches by traversing every stored value, a skip list starts from the maximum level, checks whether “next” points to a key greater than the search key or to a nullptr, and if so moves down to the level below. This reduces the expected search time from O(n) with a plain linked list to O(log n).

    The reason it’s probabilistic (in this case using a pseudo-random number to pick each node’s height) is that this makes it easier to insert and remove elements; otherwise (if you went with the idealized, deterministic form) you would have to rebuild the entire data structure each and every time you wanted to add or remove an element.

    In my testing with 1,000,000 elements, search time dropped from about 6 s with a linked list to less than 1 s with a skip list!
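    The mechanism described above can be sketched in Python. This is a minimal illustration, not the implementation from my testing; the class names, `MAX_LEVEL = 16`, and the coin-flip probability `P = 0.5` are all illustrative choices.

    ```python
    import random

    class Node:
        def __init__(self, key, level):
            self.key = key
            # one forward pointer per level this node participates in
            self.forward = [None] * (level + 1)

    class SkipList:
        MAX_LEVEL = 16   # cap on tower height
        P = 0.5          # probability of growing a node's tower by one level

        def __init__(self):
            self.head = Node(None, self.MAX_LEVEL)  # sentinel head spans all levels
            self.level = 0                          # highest level currently in use

        def random_level(self):
            # geometric distribution: each level exists with probability P
            lvl = 0
            while random.random() < self.P and lvl < self.MAX_LEVEL:
                lvl += 1
            return lvl

        def search(self, key):
            node = self.head
            # start at the top level; drop down whenever the next key overshoots
            for i in range(self.level, -1, -1):
                while node.forward[i] is not None and node.forward[i].key < key:
                    node = node.forward[i]
            node = node.forward[0]
            return node is not None and node.key == key

        def insert(self, key):
            # remember the rightmost node visited on each level
            update = [self.head] * (self.MAX_LEVEL + 1)
            node = self.head
            for i in range(self.level, -1, -1):
                while node.forward[i] is not None and node.forward[i].key < key:
                    node = node.forward[i]
                update[i] = node
            lvl = self.random_level()
            if lvl > self.level:
                self.level = lvl
            new = Node(key, lvl)
            # splice the new node in at every level of its tower
            for i in range(lvl + 1):
                new.forward[i] = update[i].forward[i]
                update[i].forward[i] = new
    ```

    Note that insertion only splices pointers along the search path, which is exactly why the probabilistic form avoids the full rebuild the deterministic version would need.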



  • That AI (as in “generative AI”) helps with learning if you give it the right prompt. There is evidence that when a user asks AI to implement code, they (the user) won’t touch it afterwards because they are unfamiliar with the code it generated. The AI effectively creates a psychological black box that no programmer wants to touch, even for a (relatively speaking) small snippet of a larger program, whether it was written by another programmer or by themselves.

    To generalize further: I fully believe AI doesn’t improve the learning process; it just makes material more accessible and easier to digest for people less literate in a field. I can explain Taylor expansions and power series simplistically to my brother, who is less literate and familiar with math. But I would be shocked if, after a brief general overview, he could suddenly approximate any function or differential equation.

    The same applies to ChatGPT: you can ask it to explain Taylor and power-series solutions simplistically, or better yet, to approximate a differential equation, but that doesn’t change the fact that you still can’t replicate it yourself. I know I’m talking about an extreme case, where the person trying to learn Taylor expansions has no prior experience with math, but it won’t even work for someone who does…

    I want to pose a simple thought experiment from my experience using AI on, say, Taylor expansions. Let’s assume I want to learn Taylor expansions, I’ve already done differential calculus (the main prerequisite), and I ask ChatGPT “how do I do Taylor expansions” (as in, what is the proof of the general series expansion), and to show an example of applying a Taylor expansion to a function. What happens when I try to do a problem is that I experience a level of uncertainty in my ability to actually perform it, and that’s when I ask ChatGPT whether I did it correctly or not. You sort of see what I’m saying: it’s a downward spiral of losing your certainty, sanity, and time the more you use it.
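    For concreteness, the expansion in question (this is the standard textbook statement, added here for reference): the Taylor series of a function f about a point a is

    ```latex
    f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^{n}
    ```

    so expanding e^x about a = 0 gives 1 + x + x²/2! + x³/3! + ⋯, which is exactly the kind of mechanical step that looks easy when ChatGPT does it and feels uncertain when you try it alone.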

    That is what the programmers are experiencing: it’s not that they don’t want to touch the code because they are unfamiliar with what the AI generated; it’s that they are uncertain of their own ability to fix an issue, since they may fuck it up even more. People are terrified of the concept of failure and of fucking shit up, and by using AI they “solve” that issue of theirs, even though the probability of it hallucinating is higher than if someone spent the time figuring out the conflicts themselves.


  • Privacy reasons. More specifically, I just don’t like using platforms when there are alternatives that don’t compromise my data. In the end, I don’t lose that many features or communities going this route. That said, I do miss shitting on the people who joined the “christian V atheist” Facebook group; it’s one of my guilty pleasures. These people can’t have a logical debate, and oftentimes it’s completely unrelated to Christianity or atheism anyway, so I end up just personally insulting them.