
ChubakPDP11+TakeWithGrainOfSalt @ ChubakPDP11 @programming.dev
Posts: 0 · Comments: 70 · Joined 1 yr. ago

  • This statement is completely wrong. Like, to a baffling degree. It kinda makes me wonder if you’re trolling.

    No, I just struggle to get my meaning across, and this stuff is new to me. What I meant was 'Go does memory management LIKE a VM does', as in 'baking in the GC'. Does that make sense? Or am I still wrong?

  • I know about all this --- I actually began implementing my own JVM language a few days ago. I know Android uses Dalvik, btw. But I guess a lot of people can use this info; an infodump is always good. I do that too.

    btw, I actually have messed around with libgccjit, and I think at least on x86 it makes zero difference. I once ran a test:

    - Find /e/ with MAWK -> 0.9s
    - Find /e/ with JAWK -> 50s

    No shit! It's seriously slow.

    Now compare this with go-awk: 19s.

    Go has a garbage collector, a managed heap, etc. --- basically a 'compiled VM'. I think if you want fast code, ditch the runtime.

  • True, but see, all of these build on theoretical regex. Theoretical regex has only 3 operators: dot for concatenation, pipe for alternation, and the Kleene star [re: Sipser's]. These 3 operators can express exactly what a finite automaton can; you don't really need all those other operators. Read this: https://swtch.com/~rsc/regexp/regexp1.html Algorithms like Thompson's construction can translate a regex into a nondeterministic finite automaton (NFA) quite quickly, and from there you can build a DFA or just simulate the NFA.

    I would not call PCRE 'regular expressions', really. The article I linked explains why they are more a practical utility than a theoretical construct. The regex in use today is far from the regex one learns about in books.

    I think regex is abused. People use it to parse context-free grammars. Extremely intricate, feature-rich patterns lead people to make mistakes and end up with security holes!

    That being said, I really enjoy NeoVim's regex. I also like Perl's. But I would not rely on them for parsing, or even lexing.

  • Cool, and as I just said, Rust is more of a 'fandom' than a 'compiler', really; it's also not much of a 'language'. I use C because it's standardized by ISO, not by people who keep RFC'ing their ideas instead of implementing them themselves.

  • (Sorry if this is a double post.) I think what you call 'decoration' I call 'augmentation'. After many iterations, I came up with this AST: https://pastebin.com/NA1DhZF2

    I like it, but I am still skeptical about committing to it. I really want to use functors... but I'm not sure how. These are new concepts to me!

  • The advice offered by Steele in this video no longer applies, though it's still a bit more up-to-date than Kernighan's talk of a similar title. The fun of this video is in how he twists the English language; he's truly an erudite man.

    The reason this advice no longer applies is that I, as a person trying to enter the world of langdev, personally see no reason to define a new language. I think we should find new ways to describe the languages we already have, i.e. implement them.

    I am currently making a C compiler in OCaml, alongside some other languages. I just began work on the AST tags, and I somehow decided to use SSA versioning there.

    But describing a new compiler for C is a bit blasé. I only do it because C is easy, at least on the surface. For example, there's no automatic GC. There are no first-class functions or function literals (aka 'lambdas', although that term is massively misused --- function literals are one thing, lambda literals another, lambda expressions another, function expressions yet another, and so on. I don't have a classical education, but I've come to think imperative languages abuse the term 'lambda' to a dangerous degree. They named their literals after the concept those literals derive from; it's like calling binary computers --- not the concept, the literal thing --- 'Neumann machines'. Like, go to Best Buy and say 'Give me this Neumann machine to play games on'! Maybe I only think that because I'm too uneducated, anyway).

    Besides that, there are:

    1. Too many old languages could use a new veneer, like an SML compiler that targets MLIR or LLVM.
    2. Too many interpreted languages could use being JITted, like Ruby. I'm not sure if there's a JITted Ruby, but I just discovered how sweet the language is and I'd like a faster version.
    3. We could dig a mass grave and bury every Python user alive, after torturing them (I'm kidding! lol)

    So I'm not very educated; I brute-force. I rely on ChatGPT models to spit facts at me, or give me validation on my work, because I kinda need a 'college simulator'. Like, I figure, I don't have any peers, so let's make this bot my peer.

    In the realm of DSLs, let's look at a successful example of 're-description': Fish.

    Fish is truly a marvel. Ever since I switched to it, you wouldn't believe how much faster I work. I don't know if there were interactive-friendly shells before Fish, but Fish is 'Friendly', you know?

    I am implementing my own shell too.

    I dunno man. I'm just rambling.

    Thanks.