Hi folks, Daniel Rosenwasser from the TypeScript team here. We're obviously very excited to announce this! RyanCavanaugh (our dev lead) and I are around to answer any quick questions you might have. You can also tune in to the Discord AMA mentioned in the blog this upcoming Thursday.
I write a lot of tools that depend on the TypeScript compiler API, and they run in a lot of JS environments, including Node and the browser. The current CJS codebase is even a little tricky to load into standard-JS-module-supporting environments like browsers, so I've been _really_ looking forward to what Jake and others have said will be an upcoming standard-modules-based version.
Is that still happening, and how will the native compiler be distributed for us tools authors? I presume WASM? Will the compiler API be compatible? Transforms, the AST, LanguageService, Program, SourceFile, Checker, etc.?
I'm quite concerned that the migration path for tools could be extremely difficult.
[edit] To add to this as I think about it: I maintain libraries that build on top of the TS API, and are then in turn used by other libraries that still access the TS APIs. Things like framework static analysis, then used by various linters, compilers, etc. Some linters are integrated with eslint via typescript-eslint. So the dependency chain is somewhat deep and wide.
Is the path forward going to be that just the TS compiler has a JS interop layer and the rest stays the same, or are all TS ecosystem tools going to have to port to Go to run well?
In my experience it is pretty difficult to make WASM faster than JS unless your JS is really crappy and inefficient to begin with. LLVM-generated WASM is your best bet to surpass vanilla JS, but even then it's not a guarantee, especially when you add JS interop overhead. It sort of depends on the specific thing you are doing.
I've found that as of 2025, Go's WASM generator isn't as good as LLVM's, and it has been very difficult for me to even get parity with vanilla JS performance. There is supposedly a way to use a subset of Go with LLVM for faster WASM, but I haven't tried it (https://tinygo.org/).
I'm hoping that Microsoft might eventually use some of their WASM chops to improve Go's native WASM compiler. Their .NET WASM compiler is pretty darn good, especially if you enable AOT.
I think the Wasm backends for both Golang and LLVM have yet to support the Wasm GC extension, which would likely be needed for anything like real parity with JS. The present approach is effectively including a full GC implementation alongside your actual Golang code and running that within the Wasm linear memory array, which is not a very sensible approach.
The major roadblocks for WasmGC in Golang at the moment are (A) Go expects a non-moving GC which WasmGC is not obligated to provide; and (B) WasmGC does not support interior pointers, which Go requires.
These are no different than the issues you'd have in any language that compiles to WasmGC, because the new GC'd types are (AIUI) completely unrelated to the linear "heap" of ordinary WASM - they are pointed to via separate "reference" types that are not 'pointers' as normally understood. That whole part of the backend has to be reworked anyway, no matter what your source language is.
Go exposes raw pointers to the programmer, so from your description I think WasmGC's semantics are too rudimentary to implement Go's; there would need to be a WasmGC 2.0 to make this work.
It sounds like it would be a great fit for e.g. Lua though.
The GC extension is supported within browsers and other WASM runtimes these days - it's effectively part of the standard. Compiler developers are dropping the ball.
Interop with a WASM-compiled Go binary from JS will be slower but the WASM binary itself might be a lot faster than a JS implementation, if that makes sense. So it depends on how chatty your interop is. The main place you get bogged down is typically exchanging strings across the boundary between WASM and JS. Exchanging buffers (file data, etc) can also be a source of slowdown.
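For example, with Go's syscall/js, every string that crosses the boundary gets copied, so chatty APIs pay that cost on every call. A minimal sketch (the `shout` function is a made-up example):

```go
//go:build js && wasm

package main

import "syscall/js"

func main() {
	// Each string crossing the JS/Wasm boundary is copied, so a chatty
	// API (many small calls) pays the marshaling cost over and over.
	js.Global().Set("shout", js.FuncOf(func(this js.Value, args []js.Value) any {
		s := args[0].String() // copies the JS string into Wasm memory
		return s + "!"        // copied back to a JS string on return
	}))
	select {} // keep the Go runtime alive so JS can keep calling in
}
```

Batching work so you cross the boundary once with a big payload, rather than thousands of times with small ones, is usually the fix.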
Like others I'm curious about the choice of technology here. I see you went with Go, which is great! I know Go is fast! But it's also a more 'primitive' language (for lack of a better way of putting it) with no frills.
Why not something like Rust? Most of the JS ecosystem that is moving toward faster tools seems to be going straight to Rust (Rolldown, rspack (the webpack successor), SWC, OXC, Lightning CSS / Parcel, etc.), and one of the reasons given is that it has really great language constructs for parsers and traversing ASTs (I think largely due to the existence of `match`, but I'm not entirely sure).
Was any thought given to this? And if so, what were the deciding factors for Go vs. something like Rust or another language entirely?
Language choice is always a hot topic! We extensively evaluated many language options, both recently and in prior investigations. We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript. We wrote multiple prototypes experimenting with different data representations in different languages, and did deep investigations into the approaches used by existing native TypeScript parsers like swc, oxc, and esbuild. To be clear, many languages would be suitable in a ground-up rewrite situation. Go did the best when considering multiple criteria that are particular to this situation, and it's worth explaining a few of them.
By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
Go also offers excellent control of memory layout and allocation (both on an object and field level) without requiring that the entire codebase continually concern itself with memory management. While this implies a garbage collector, the downsides of a GC aren't particularly salient in our codebase. We don't have any strong latency constraints that would suffer from GC pauses/slowdowns. Batch compilations can effectively forego garbage collection entirely, since the process terminates at the end. In non-batch scenarios, most of our up-front allocations (ASTs, etc.) live for the entire life of the program, and we have strong domain information about when "logical" times to run the GC will be. Go's model therefore nets us a very big win in reducing codebase complexity, while paying very little actual runtime cost for garbage collection.
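As a rough illustration (a sketch, not the actual compiler's code): a batch compiler in Go can opt out of collection entirely and let the OS reclaim everything at exit, or trigger collections only at known quiet points.

```go
package main

import (
	"runtime/debug"
)

// compile stands in for a hypothetical batch compilation.
func compile() { /* parse, bind, check, emit... */ }

func main() {
	// Disable the collector up front; the process exits when the batch
	// compile finishes, so the OS reclaims all memory anyway.
	debug.SetGCPercent(-1)

	compile()

	// In a long-lived (non-batch) scenario, you would instead re-enable
	// the GC and/or run it explicitly at "logical" quiet points:
	// debug.SetGCPercent(100); runtime.GC()
}
```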
We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
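For a flavor of what those walks look like, here's a minimal sketch with a hypothetical node shape (not the real AST types):

```go
// Package ast sketches upward and downward walks over polymorphic nodes.
package ast

type Kind int

const (
	KindIdentifier Kind = iota
	KindCallExpression
	KindSourceFile
)

// Node is a deliberately simplified stand-in for a syntax-tree node.
type Node struct {
	Kind     Kind
	Parent   *Node
	Children []*Node
}

// Walk visits every node in the subtree, top-down.
func Walk(n *Node, visit func(*Node)) {
	visit(n)
	for _, c := range n.Children {
		Walk(c, visit)
	}
}

// FindAncestor walks upward to the nearest enclosing node of a given kind.
func FindAncestor(n *Node, kind Kind) *Node {
	for p := n.Parent; p != nil; p = p.Parent {
		if p.Kind == kind {
			return p
		}
	}
	return nil
}
```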
Acknowledging some weak spots, Go's in-proc JS interop story is not as good as some of its alternatives. We have upcoming plans to mitigate this, and are committed to offering a performant and ergonomic JS API. We've been constrained in certain possible optimizations due to the current API model where consumers can access (or worse, modify) practically anything, and want to ensure that the new codebase keeps the door open for more freedom to change internal representations without having to worry about breaking all API users. Moving to a more intentional API design that also takes interop into account will let us move the ecosystem forward while still delivering these huge performance wins.
This is a great response but this is "why is Go better than JavaScript?" whereas my question is "why is Go better than C#, given that C# was famously created by the guy writing the blog post and Go is a language from a competitor?"
C# and TypeScript are Hejlsberg's children; C# is such an obvious pick that there must have been a monster problem with it that they didn't think could ever be fixed.
C# has all that stuff that the FAQ mentions about Go while also having an obvious political benefit. I'd hope the creator of said language who also made the decision not to use it would have an interesting opinion on the topic! I really hope we find out the real story.
As a C# developer I don't want to be offended but, like, I thought we were friends? What did we do wrong???
Transcript: "But I will say that I think Go definitely is much more low-level. I'd say it's the lowest level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In contrast, C# is sort of bytecode-first, if you will. There are some ahead-of-time compilation options available, but they're not on all platforms and don't really have a decade or more of hardening. They weren't engineered that way to begin with. I think Go also has a little more expressiveness when it comes to data structure layout, inline structs, and so forth."
Thanks for the link. I'm not fully convinced by Anders' answer. C# has records, first-class functions, structs, Span. That's a lot of control, I'd say more than Go offers. I'd even say C# is much closer to TS than Go is. You can use records for the data structures. The only little annoyance is that you need to write the functions as static methods. So an argument for easy translation would lead to C#. Also, C# has advantages over Go, e.g. null safety.
Sure, AOT is not as mature in C#, but is that reason enough to be a showstopper? It seems there are other reasons Anders doesn't want to address publicly. Maybe reasons as simple as "Go is 10 times easier to pick up than C#" and "language features don't matter when the project matters". Those would indeed hurt the image of C#, and Anders obviously doesn't want that.
For anyone who can't watch the video, he mentions a few things (summarizing briefly just the linked time code, it's worth a watch):
- Go being the lowest level language that still has garbage collection
- Inline structs and other data structure expressiveness features
- Existing JS code is in a C-like function+data structure style and not an OOP style, this is easier to translate directly to Go while C# would require OOPifying it.
An unpopular pick that is probably more low level than Go but also still has a GC: D. Understandable why you wouldn't pick D though. Its ecosystem is extremely small.
I think you D fans need to dogfood a startup based around it.
It's a fascinating language, but it lacks a flagship product.
I feel the same way about Haxe. Someone created an amazing language, but it lacks a big enough community.
Realistically languages need 2 things for adoption. Momentum and ease of use. Rust has more momentum than ease, but arguably can solve problems higher level languages can't.
I'm half imagining a hackathon like format where teams are challenged to use niche languages. The foundations behind these languages can fund prizes.
Did my post come off as a fan? I directly criticized its ecosystem. It wouldn't be my first pick either. I was just making conversation that there are other options.
And AFAIK Symmetry Investments is that dogfood startup.
> "given that C# was famously created by the guy writing the blog post"
What is this logic? "You worked on C# years ago so you must use C# for everything"?
"You must dictate C# to every team you lead forever, no matter what skills they have"?
"You must uphold a dogma that C# is the best language for everything, because you touched it last"?
Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those? Because there is no logic; the person who created hammers doesn't have to use hammers to solve every problem.
Yes, but C# is the Microsoft language, and I would say TypeScript is 2nd place Microsoft language (sorry F# folks - in terms of popularity not objective greatness of course).
So it's not just that the lead architect of C# is involved in the TypeScript changes. It's also that this is under the same roof and the same sign hangs on the building outside for both languages.
If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?
Funny you bring up this analogy. Tons of auto manufacturers these days will license other manufacturers' engines and use them in their cars. E.g. a fair number of Ford's cars have had Mazda engines, and a fair number of Mazdas have had Ford engines.
I do love F#, but its compiler is a rusty set of monkey bars. It's somehow single pass, meaning the type checker will struggle if you don't reorder certain expressions - but also dog slow, especially for `inline` definitions (which work more like templates or hygienic macros than .net generics, and are far more powerful.) File order matters, bafflingly! Newer .net features like spans and ref structs are missing with no clear path to implementation. Doing moderately clever things can cause the compiler to throw weird, opaque, internal errors. F# is built around immutability but there's no integration with the modern .net immutable collections.
It's clearly languishing and being kept alive by a skeleton crew, which is sad, because it deserves better, but I've used research prototypes less clunky than what ought to be a flagship.
> "So it's not just that the lead architect of C# is involved in the TypeScript changes."
Anders Hejlsberg hasn't been the lead architect of C# for like 13 years. Mads Torgersen is:
https://dotnetcore.show/episode-104-c-sharp-with-mads-torger... - "I got hired by Microsoft 17 years ago to help work on C#. First, I worked with Anders Hejlsberg, who’s sort of the legendary creator and first lead designer of C#. And then when he and I had a little side project with others to do TypeScript, he stayed over there. And I got to take over as lead designer C#. So for the last, I don’t know, nearly a decade, that’s been my job at Microsoft to, to take care of the evolution of the C# programming language"
Years later, "why aren't you using YOUR LANGUAGE, huh? What's the matter, you don't like YOUR LANGUAGE?" is pushy and weird; he's a person with a job, not a religious cult leader.
> "If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?"
I'm struggling to understand how this is a bad look for Typescript. Do you mean that the specific choice of Go reflects poorly on Typescript, or just the decision to rewrite the compiler in a different non-TS language?
If it's the latter, I think the pitch of TS remains the same — it's a better way of writing JS, not the best language for all contexts.
I think a lot of folks downplay the performance costs for the convenience of a shared codebase between the front and back end.
If the TS team is getting a 10x improvement moving from TS to Go, you might imagine you could save about 10x on your server CPU. Or that your backend would be 10x more responsive.
If you have dedicated teams for front and back anyhow, is a 10x slowdown really worth a shared codebase?
I actually really enjoy Go. Sure it has a type system I wish was more powerful with lots of weird corners ( https://100go.co/ ), but it also has REALLY GOOD tooling- lots of nice libraries, the compiler is fast, the editor tooling is rock solid, it's easy to add linters to warn you about many issues (golangci-lint), and releasing binaries and updating package repositories is super nice (Goreleaser).
> Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those?
as you know full well, Delphi and Turbo Pascal don't have strong library ecosystems, don't have good support for non-Windows platforms, and don't have a large developer base to hire from, among other reasons. if Hejlsberg was asked why Delphi or Turbo Pascal weren't used, he might give one or more of those reasons. the question is why he didn't use C#, for which those reasons don't apply.
GP's answer is a great answer to why Go instead of Rust, which u/no_wizard asked about. And the answer to that boils down to the need to traverse data structures in ways which Rust makes difficult, and the simplicity of a GC.
C# is a decently-designed language, but its first principles are being microsoft-y and java-y, which are perhaps two of my least favorite principles. that aside, i've worked on C# backends deployed to lots of linux boxes and it's not really second-rate these days.
Almost a decade? Amazing. Considering Go has been cross-platform since its inception, almost twice as long as that (Rust too), it's no wonder developer mindshare is elsewhere.
This is Anders Hejlsberg, the creator of C#, working on a politically important project at Microsoft. That's what I mean by political benefit. The larger open source world doesn't matter for this decision which is why this is a simple announcement of an internal Microsoft decision rather than an invitation for comments ahead of time.
I’m sure Microsoft’s strategy department would disagree with you. As a c# devotee - I get that you’re upset. And you may want to update your priors on where c# sits in Microsoft’s current world. But I think it’s a mistake to imagine this isn’t a well reasoned decision.
They can disagree if they want but as a career-long Microsoft developer they can't fool me that easily. I'm not even complaining, I'm just stating a fact that high-level steering decisions like this are made in Teams meetings between Microsoft employees, not in open discussion with the community. It's the same in .NET, which is a very open source project whose highest-level decisions are, nonetheless, made in Teams meetings between Microsoft employees and then announced to the public. I'm fine with this but let's not kid ourselves about it.
That said, I must have misstated my opinion if it seems like I didn't think they have a good reason. This is Anders Hejlsberg. The guy is a genius; he definitely has a good reason. They just didn't say what it is in this blog post (but did elsewhere in a podcast video linked in the HN thread).
> The larger open source world doesn't matter for this decision
It obviously does, because the larger open source world is a huge user of Typescript. This isn't some business-only Excel / PowerBI type product.
To put it another way, I think a lot of people would get quite pissed if tsc was going to be rewritten in C# because of the obvious headaches that's going to cause to users. Go is pretty much the perfect option from a user's point of view - it generates self-contained statically linked binaries.
It would have a substantial risk for the typescript project. Many people would see it as an unwanted and hostile push of a Microsoft technology on the typescript community.
And there would be logistical problems. With go, you just need to distribute the executable, but with c#, you also need a .net runtime, and on any platform that isn't Windows that almost certainly isn't already installed. And even if it is, you have to worry if the runtime is sufficiently up to date.
If they used c# there is a chance the community might fork typescript, or switch to something else, and that might not be a gamble MS would want to take just to get more exposure for c#.
Okay, not to be petty here, but it's worth noting that on his GitHub he has not starred the dotnet repository, yet has starred multiple Go repos and multiple other C++ and TS repos.
It's always the same response: C# was crappy, but it's not crappy anymore. Well, guess what: Go has been not-crappy for a lot longer than C# has, and maybe that's part of the reason people like it more.
.NET executables require a runtime environment to be installed.
Go executables do not.
TSC is installed in too many places for that burden to be added all of a sudden. It's the same reason Java has had a complicated acceptance history: it's fine in the places where it's pre-installed, but nowhere else.
Node/React/TypeScript developers do not want to install .NET all of a sudden. If you doubt the reaction would be that bad, pretend they had decided to write it in Java and ask if you think Node/React/TypeScript developers WANT to install Java.
.NET has been able to build a self-contained single-file executable for both the JIT and AOT targets for quite some time. Java also does not require the user to install a runtime. JLink and JPackage have both been around for a long time.
Maybe some other runtimes do this or it has been changed, but in the past, self-contained single-file .NET deployment just meant that it rolled all the files up during publishing, and when you ran it, it extracted them to a folder. Not really like a single statically linked executable.
Ok. Credit where credit is due, but considering the sheer value of having the next generation of programmers comfortable with .NET, Microsoft *should* chip in more.
Hasn't Microsoft largely hitched their horse to Go these days, though (not just this project)? They even maintain their own Go compiler: https://github.com/microsoft/go
It is a huge company. They can do more than one thing. C#/.NET certainly isn't dead, but I'm not sure they really care if you do use it like they once did. It's there if you find it useful. If not, that's cool too.
I'm sure Microsoft could find the money to do a lot of different things. But why that instead of the infinite alternatives that the money could be spent on instead?
"any reason Microsoft isn't sponsoring a solid open source game engine"
I can see them doing this in the future, tbh. Given how large their Xbox gaming ecosystem is, this path makes a lot of sense, since they could cut costs while giving their studios and indie developers more options.
Unless I missed Unity sorting a ton of stuff out, I assume they're going to have to sell themselves off for parts at some point, after the runtime-fee fiasco that was supposed to make them profitable led to developers being angry or outright leaving the ecosystem. If that happens, my assumption is that MS buys them for this reason, unless the DOJ gets involved for some reason.
> we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
Cool. Can you tell us a bit more about the technical process of porting the TS code over to Go? Are you using any kind of automation or translation?
Personally, I've found Copilot to be surprisingly effective at translating Python code over to structurally similar Go code.
I find the discussion about the choice quite interesting, and many points are very convincing (like the GC one). But I am a bit confused about the comparison between Go and C#. Both should meet most of the criteria, like GC, control over memory layout/allocation, and good support for concurrency. I'm curious what the weaknesses of C# for this particular use case were that led to the decision for Go.
Anders answers this in the video. Go is lower level and also closer to JavaScript's programming style. They didn't want to go fully object-oriented for this project.
C# is fine. But last I checked, AOT compilation generates a bunch of .dll files, which isn't suitable for a CLI program the way Go's zero-dependency binary is.
No. This is normal native compilation mode. As you reference more features from either the standard library or the dependencies, the size of the binary will grow (sometimes marginally, sometimes substantially if you are heavily using struct generics with virtual members), but on average it should be more scalable than Go’s compilation model. Even JIT-based single-file binaries, with trimming, take about ~13-40 MB depending on the task. The runtime itself AFAIK, if installed separately, is below 100MB (installing full SDK takes more space, which is a given).
Spending ages slamming your head on your keyboard because you get a DLL error or similar running a .NET app and just can't find the correct runtime version / download is a great pastime.
Then, when you find the correct version, you have to install both the x86 and x64 versions because the first one you installed doesn't work.
Yeah, great ecosystem.
At least a Go binary runs 99.99999% of the time when you start it.
So when can we expect Go support in Visual Studio? I am sold by Anders' explanation that Go is the lowest-level language you can use that still has garbage collection!
Personally, I want to know why Go was chosen instead of Zig. I think Zig is really more WASM-friendly than Go, and it's much more similar to JavaScript than Rust is.
Go has buildmode=c-shared, which compiles your program to a C-style shared library with C ABI exports. Any first call into your functions initializes the runtime transparently. It's pretty seamless and automatic, and it'll perform better than embedding a WASM engine.
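A minimal sketch of what that looks like (hypothetical file and function names), built with `go build -buildmode=c-shared -o libhello.so hello.go`:

```go
// hello.go: exposing a Go function over the C ABI via buildmode=c-shared.
package main

import "C"

//export Add
func Add(a, b C.int) C.int {
	// The Go runtime initializes itself transparently on the first call
	// into the library; the host does no explicit setup.
	return a + b
}

// main is required by buildmode=c-shared but is never used as an entry point.
func main() {}
```

The build emits a shared library plus a generated C header, which anything that speaks the C ABI (including Node via an FFI addon) can load.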
We are sure there will be a way to embed via something like WebAssembly, but the goal is to start from the IPC layer (similar to LSP), and then explore how possible it will be to integrate at a tighter level.
Esbuild is distributed as a series of native executables that are selectively installed by looking at arch and platform. Although you can build esbuild in wasm (and that's what you use when you run it in the browser), what you actually run from .bin in the CLI is a native executable, not wasm.
Why embed it if you can run a process alongside yours and use efficient IPC? I suppose the compiler code should not be in some tight loop where an IPC boundary would be a noticeable slowdown. Compilation occurs relatively rarely, compared to running the compiled code, in things like Node / Deno / Bun / Jupyter. LSPs use this model with a pretty wasteful JSON-RPC IPC, and they don't seem to feel slow.
Because running a parallel process is often difficult. In most cases, the question becomes:
So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC? How do you shut it down when the 'host' process dies?
Not vaguely. Not hand wave "just launch it". How exactly do you do it?
How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.
How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?
When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.
So now each kernel process has to manage another process, which it talks to via IPC?
...
Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.
> So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC?
Usually the very easiest way to do this is to launch the target as a subprocess and communicate over stdin/stdout. (Obviously, you can also negotiate things like shared memory buffers once you have a communication channel, but stdin/stdout is enough for a lot of stuff.)
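A minimal sketch of the parent side in Go (`tsc-helper` and the line-delimited JSON protocol are made up for illustration; error handling elided):

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	// Spawn the helper and grab both ends of its stdio.
	cmd := exec.Command("tsc-helper")
	stdin, _ := cmd.StdinPipe()
	stdout, _ := cmd.StdoutPipe()
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// One request, one line-delimited reply. A real protocol would frame
	// messages more carefully (e.g. Content-Length headers, as LSP does).
	fmt.Fprintln(stdin, `{"method":"check","file":"main.ts"}`)
	reply, _ := bufio.NewReader(stdout).ReadString('\n')
	fmt.Print(reply)

	stdin.Close() // closing stdin signals the child to exit
	cmd.Wait()
}
```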
> How do you shut it down when the 'host' process dies?
From the perspective of the parent process, you can go through some extra work to guarantee this if you want; every operating system has facilities for it. For example, in Linux, you can make use of PR_SET_PDEATHSIG. Actually using that facility properly is a bit trickier, but it does work.
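A Linux-only sketch of that (the `tsc-helper` binary name is hypothetical; note the known caveat that the death signal is tied to the spawning thread, not the whole parent process):

```go
//go:build linux

package main

import (
	"os/exec"
	"syscall"
)

// spawnWithDeathSignal starts the helper and asks the kernel to deliver
// SIGKILL to it if the parent dies (PR_SET_PDEATHSIG under the hood).
func spawnWithDeathSignal() (*exec.Cmd, error) {
	cmd := exec.Command("tsc-helper")
	cmd.SysProcAttr = &syscall.SysProcAttr{Pdeathsig: syscall.SIGKILL}
	return cmd, cmd.Start()
}
```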
However, since the child process, in this case, is aware that it is a child process, the best way to go about it would be to handle it cooperatively. If you're communicating over stdin/stdout, the child process's stdin will close when the parent process dies. This is portable across Windows and UNIX-likes. The child process can then exit.
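The child side then needs nothing more than a read loop that exits on EOF; a sketch (hypothetical one-line-per-request protocol):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		// Handle one request per line.
		_ = scanner.Text()
		fmt.Println(`{"ok":true}`)
	}
	// stdin hit EOF: the parent closed the pipe or died. Shut down cleanly.
	os.Exit(0)
}
```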
> How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.
On Android, there is nothing special to do here as far as I know. You should be able to bundle and spawn a native process just fine. Go binaries are no exception.
On iOS, it is true that apps are not allowed to spawn child processes, as far as I am aware. On iOS you'd need a different strategy. If you still want a native code approach, though, it's more than doable. Since you're on iOS, you'll have some native code somewhere. You can compile Go code into a Clang-compatible static library archive, using -buildmode=c-archive. There's a bit more nuance to it to get something that will link properly in iOS, but it is supported by Go itself (Go supports iOS and Android in the toolchain and via gomobile.) Once you have something that can be linked into the process space, the old IPC approach would continue to work, with the semantic caveat that it's not technically interprocess anymore. This approach can also be used in any other situation where you're running native code, so long as you can link C libraries.
If you're in an even more restrictive situation, like, I dunno, Cloudflare Pages Functions, you can use a WASM bundle. It comes at a performance hit, but given that the Go port of the TypeScript compiler is already roughly 3.5x faster than the TypeScript implementation, it probably will not be a huge issue compared to today's performance.
> How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?
There are no particular complexities with distributing Go binaries. You need to ship a binary for each architecture and OS combination you want to support, but Go has relatively straight-forward cross-compiling, so this is usually very easy to do. (Rather unusually, it is even capable of cross-compiling to macOS and iOS from non-Apple platforms. Though I bet Zig can do this, too.) You just include the binary into your build. If you are using some bindings, I would expect the bindings to take care of this by default, making your resulting binaries "just work" as needed.
It will not conflict with other applications that do the same thing.
> When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.
> So now each kernel process has to manage another process, which it talks to via IPC?
Yes, that's right: you would have to have another process for each existing process that needs its own compiler instance, if going with the IPC approach. However, unless we're talking about an obscene number of processes, this is probably not going to be much of an issue. If anything, keeping it out-of-process might help improve matters if it's currently doing things synchronously that could be asynchronous.
Of course, even though this isn't really much of an issue, you could still avoid it by going with another approach if it really was a huge problem. For example, assuming the respective Jupyter kernel already needs Node.js in-process somehow, you could just as well have a version of tsc compiled into a Node-API module, and do everything in-process.
> Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.
Except for browsers and edge runtimes, it should be possible to make an embedded version of the compiler if it is necessary. I'm not sure if the TypeScript team will maintain such a version on their own; it remains to be seen exactly what approach they take for IPC.
I'm not a TypeScript Compiler developer, but I hope these answers are helpful in some way anyways.
Thanks for chiming in with these details, but I would just like to say:
> It will not conflict with other applications that do the same thing.
It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.
For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?
Not conflicting is not a property of parallel binary deployment and communication via IPC by default.
IPC is, by definition, intended to be accessible by other processes.
Jupyter kernels, for example, are launched with a specified port and a secret passed by CLI argument, if I recall correctly.
However, you'd have to rely on that mechanism being built into the TypeScript compiler service.
...i.e. it's a bit complicated, right?
Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed Postgres. ...but they don't try to ship a copy of it alongside their apps either (usually).
> Not conflicting is not a property of parallel binary deployment
I fail to see how starting another process under an OS like Linux or Windows can be conflicting. Don't share resources, and you're conflict-free.
> IPC is, by definition, intended to be accessible by other processes
Yes, but you can limit the visibility of the IPC channel to a specific process, in the form of stdin/stdout pipe between processes, which is not shared by any other processes. This is enough of a channel to coordinate creation of a more efficient channel, e.g. a shmem region for high-bandwidth communication, or a Unix domain socket (under Linux, you can open a UDS completely outside of the filesystem tree), etc.
A Unix shell is a thing that spawns and communicates with running processes all day long, and I've yet to hear about any conflicts arising from its normal use.
This seems like an oddly specific take on this topic.
You can get a conflicting resource in a shell by typing 'npm start' twice in two different shells, and it'll fail with 'port in use'.
My point is that you can do non-conflicting IPC, but by default IPC is conflicting, because it is intended to be.
You cannot bind the same port, semaphore, whatever if someone else is using it. That's the definition of having addressable IPC.
I don't think arguing otherwise is defensible or reasonable.
Worrying that a network service might bind the same port as another copy of the same service deployed on the same target is entirely reasonable.
I think we're getting off into the woods here with an arbitrary 'die on this hill' point about semantics which I really don't care about.
TLDR: If you ship an IPC binary, you have to pay attention to these concerns. Pretending otherwise means you're not doing it properly.
It's not an idle concern; it's a real concern that real actual application developers have to worry about, in real world situations.
I've had to worry about it.
I think it's not unfair to think it's going to be more problematic than the current, very easy, embedded story, and it is a concern that simply does not exist when you embed a library instead of communicating using IPC.
> It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.
Sure, some IPC approaches can run into issues, such as using TCP connections over loopback. However, I'm describing an approach that should never conflict since the resources that are shared are inherited directly, and since the binary would be embedded in your application bundle and not shared with other programs on the system. A similar example would be language servers which often work this way: no need to worry about conflicts between different instances of language servers, different language servers, instances of different versions of the same language server, etc.
There's also some precedent for this approach, since as far as I understand it, it's also what the Go-based ESBuild tool does[1], also popular in the Node.js ecosystem (it is used by Vite).
> For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?
> Not conflicting is not a property of parallel binary deployment and communication via IPC by default.
> IPC is, by definition, intended to be accessible by other processes.
Yes, although the set of processes which the IPC mechanism is designed to be accessible by can be bound to just one process, and there are cross-platform mechanisms to achieve this on popular desktop OSes. I cannot speak for why one would choose TCP over stdin/stdout, but I don't expect that tsc will pick a method of IPC that is flawed in this way, since it would not follow precedent anyway. (e.g. tsserver already uses stdio[2].)
> Jupyter kernels, for example, are launched with a specified port and a secret passed by CLI argument, if I recall correctly.
> However, you'd have to rely on that mechanism being built into the TypeScript compiler service.
> ...i.e. it's a bit complicated, right?
> Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed Postgres. ...but they don't try to ship a copy of it alongside their apps either (usually).
Well, I wouldn't honestly go as far as to say it's complicated. There's a ton of precedent for how to solve this issue without any conflict. I cannot speak to why Jupyter kernels use TCP for IPC instead of stdio; I'm very sure they have reasons why it makes more sense in their case. For example, in some use cases it could be faster or perhaps just simpler to have multiple channels of communication, and doing this with multiple pipes to a subprocess is a little more complicated and less portable than stdio. Same for shared memory: you can always have a protocol to negotiate shared memory across some serial IPC mechanism, but you'll almost always need a couple different shared memory backends, and it adds some complexity. So that's one potential reason.
(edit: Another potential reason to use TCP sockets is, of course, if your "IPC" is going across the network sometimes. Maybe this is of interest for Jupyter, I don't know!)
That said, in this case, I think it's a non-issue. ESBuild and tsserver demonstrate sufficiently that communication over stdio is sufficient for these kinds of use cases.
And of course, even if the Jupyter kernel itself has to speak the TCP IPC protocols used by Jupyter, it can still subprocess a theoretical tsc and use stdio-based IPC. Not much complexity to speak of.
Also, unrelated, but it's funny you should say that about Postgres, because there have actually been several different projects that deliver an "embeddable" subset of Postgres. Of course, the reasoning for why you would not necessarily want to embed a database engine is quite a lot different from this, since in this case IPC is merely an implementation detail, whereas in the database case the network protocol and centralized servers are essentially the entire point of the whole thing.
TypeScript compiles to JavaScript. That means both `tsc` and the TS program can share the same platform today.
With a TSC in Go, that's no longer true. Previously you only had to figure out how to run JS; now you have to figure out both how to manage a native process _and_ run the JS output.
This obviously matters less for situations where you have a clear separation between the build stage and the runtime stage. Most people complaining here seem to be talking about environments where compilation is tightly integrated with the execution of the compiled JS.
This is awesome. Thanks to you and all the TypeScript team for the work they put on this project! Also, nice to see you here, engaging with the community.
Porting to Go was the right decision, but part of me would've liked to see a different approach to solve the performance issue. Here I'm not thinking about the practicality, but simply about how cool it would've been if performance had instead been improved via:
- porting to OCaml. I contributed to Flow once upon a time, and a version of TypeScript in OCaml would've been huge in unifying the efforts here.
- porting to Rust. Having "official" TypeScript crates in rust would be huge for the Rust javascript-tooling ecosystem.
- a new runtime (or compiler!). I'm thinking here of an optional, stricter version of TypeScript that forbids all the dynamic behaviours that make JavaScript hard to optimize. I'm also imagining an interpreter or compiler that can then use this stricter TypeScript to run faster or produce an efficient native binary, skipping JavaScript altogether and using types for optimization.
This last option would've been especially exciting since it is my opinion that Flow was hindered by the lack of dogfooding, at least when I was somewhat involved with the project. I hope this doesn't happen in the TypeScript project.
None of these are questions, just wanted to share these fanciful perspectives. I do agree Go sounds like the right choice, and in any case I'm excited about the improvements in performance and memory usage. It really is the biggest gripe I have with TypeScript right now!
Not Daniel, but I've ported a typechecker from PHP to Rust (with some functional changes) and also tried working with the official Hack OCaml-based typechecker (a precursor to Flow).
Rust and OCaml are _maybe_ prettier to look at, but for the average TypeScript developer Go is a much more understandable target IMO.
I am curious why dotnet was not considered - it should run everywhere Go does with added NativeAoT too, so I am especially curious given the folks involved ;)
(FWIW, It must have been a very well thought out rationale.)
Edit: watched the relevant clip from the GH discussion - makes sense. Maybe push NativeAOT to be as good?
I am (positively) surprised Hejlsberg has not used this opportunity to push C#: a rarity in the software world where people never let go of their darlings. :)
Well-optimized JavaScript can get to within about 1.5x the performance of C++ - something we have experience with having developed a full game engine in JavaScript [1]. Why is the TypeScript team moving to an entirely different technology instead of working on optimizing the existing TS/JS codebase?
Well-optimized JavaScript can, if you jump through hoops like avoiding object creation and storing your data in `Uint8Array`s. But idiomatic, maintainable JS simply can't (except in microbenchmarks where allocations and memory layout aren't yet concerns).
In a game engine, you probably aren't recreating every game object from frame to frame. But in a compiler, you're creating new objects for every file you parse. That's a huge amount of work for the GC.
I'd say that our JS game engine codebase is generally idiomatic, maintainable JS. We don't really do anything too esoteric to get maximum performance - modern JS engines are phenomenal at optimizing idiomatic code. The best JS performance advice is to basically treat it like a statically typed language (no dynamically-shaped objects etc) - and TS takes care of that for you. I suppose a compiler is a very different use case and may do things like lean on the GC more, but modern JS GCs are also amazing.
Basically I'd be interested to know what the bottlenecks in tsc are, whether there's much low-hanging fruit, and if not why not.
Note that games are based on main loops + events, for which JITs are optimized, while compilers are typically single run-to-completion, for which JITs aren't.
So this might be a very different performance profile.
*edit* I had initially written "single-pass", but in the context of a compiler, that's ambiguous.
In other words, you write asm.js, which is a textual form of WebAssembly that is also valid JavaScript, and hope your browser has an asm.js JIT compiler - which it doesn't, because asm.js was replaced by WebAssembly.
Our best estimate for how much faster the Go code is (in this situation) than the equivalent TS is ~3.5x
In a situation like a game engine I think 1.5x is reasonable, but TS has a huge amount of polymorphic data reading that defeats a lot of the optimizations in JS engines that get you to monomorphic property access speeds. If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.
I used to work on compilers & JITs, and 100% this — polymorphic calls is the killer of JIT performance, which is why something native is preferable to something that JIT compiles.
Also for command-line tools, the JIT warmup time can be pretty significant, adding a lot to overall command-to-result latency (and in some cases even wiping out the JIT performance entirely!)
> If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.
I really wish JS VMs would invest in this. The DOM is full of large inheritance hierarchies, with lots of subtypes, so a lot of DOM code is megamorphic. You can do tricks like tearing off methods from Element to use as functions, instead of calling them as virtual methods as usual, but that's quite a pain.
"Well optimized Javascript", and more generally, "well-optimized code for a JIT/optimizer for language X", is a subset of language X, is an undefined subset of language X, is a moving subset of language X that is moving in ways unrelated to your project, is actually multiple such subsets at a minimum one per JIT and arguably one per version of JIT compilers, and is generally a subset of language X that is extremely complicated (e.g., you can lose optimization if your arrays grow in certain ways, or you can non-locally deoptimize vast swathes of your code because one function call in one location happened to do one thing the JIT can't handle and it had to despecialize everything touching it as a result) such that trying to keep a lot of developers in sync with the requirements on a large project is essentially infeasible.
None of these things say "this is a good way to build a large compiler suite that we're building for performance".
Please note that compilers and game engines have extremely different needs and performance characteristics—and also that statements like "about 1.5x the performance of C++" are virtually meaningless out-of-context. I feel we've long passed this type of performance discussion by and could do with more nuanced and specific discussions.
> Why is the TypeScript team moving to an entirely different technology
A few things mentioned in an interview:
- Cannot build native binaries from TypeScript
- Cannot as easily take advantage of concurrency in TypeScript
- Writing fast TypeScript requires you to write things in a way that isn't 'normal' idiomatic TypeScript. Easier to onboard new people onto a more idiomatic codebase.
Who wants to spend all their time hand-tuning JS/TS when you can write the same code in Go, spend no time at all optimizing it, and get 10x better results?
- C++ with thousands of tiny objects and virtual function calls?
- JavaScript where data is stored in a large Int32Array and operated on like a VM?
If you know anything about how JavaScript works, you know there is a lot of costly and challenging resource management.
While Go can be considered entirely different technology, I'd argue that Go is easy enough to understand for the vast majority of software developers that it's not too difficult to learn.
It was very explicitly designed with this goal. The idea was to make a simpler Java that is as easy as possible to deploy and as fast as possible to compile, and by these measures it is a resounding success.
Well-optimized JS isn't the only point of operation here. There's a LOT of exchange, parsing and processing that interacts with the File System and the JS engine itself. It isn't just a matter of loading a JS library and letting it do its thing. Every call that crosses the boundaries from JS runtime to the underlying host environment has a cost. This is multiplied across potentially many thousands of files.
Just going from ESLint to Biome is more than a 10x improvement... it's not just 1.5x because it's not just the runtime logic at play for build tools.
I'm not sure how it is in Construct, but IME "well-optimized" JavaScript quickly becomes very difficult to read, debug, and update, because you're relying heavily on runtime implementation quirks and micro-optimizations that make a hash of code cleanliness. Even if you can hit close to native performance, the native equivalent usually has much more idiomatic code. The tsc team needs to balance the performance of the compiler against keeping the codebase maintainable, which is especially vital for such a core piece of web infrastructure as TypeScript.
Your JS code is way uglier than their Go code, if you're doing those kinds of shenanigans.
JS is 10x-100x slower than native languages (C++, Go, Rust, etc) if you write the code normally (i.e. don't go down the road of uglifying your JS code to the point where it's dramatically less pleasant to work with than the C++ code you're comparing to).
The question comes up and he quickly glosses over it, but by the sound of it he isn't impressed with the performance or support of AOT compiled C# on all targeted platforms.
Anders: It was, but I will say that I think Go definitely is -- it's, I'd say, the lowest-level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In C#, it's sort of bytecode first, if you will; there is some ahead-of-time compilation available, but it's not on all platforms and it doesn't have a decade or more of hardening. It was not geared that way to begin with. Additionally, I think Go has a little more expressiveness when it comes to data structure layout, inline structs, and so forth. For us, one additional thing is that our JavaScript codebase is written in a highly functional style -- we use very few classes; in fact, the core compiler doesn't use classes at all -- and that is actually a characteristic of Go as well. Go is based on functions and data structures, whereas C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#. That transition would have involved more friction than switching to Go. Ultimately, that was the path of least resistance for us.
Dimitri: Great -- I mean, I have questions about that. I've struggled in the past a lot with Go in functional programming, but I'm glad to hear you say that those aren't struggles for you. That was one of my questions.
Anders: When I say functional programming here, I mean sort of functional in the plain sense that we're dealing with functions and data structures as opposed to objects. I'm not talking about pattern matching, higher-kinded types, and monads.
[12:34] why not Rust?
Anders: When you have a product that has been in use for more than a decade, with millions of programmers and God knows how many millions of lines of code out there, you are going to be faced with the longest tail of incompatibilities you could imagine. So, from the get-go, we knew that the only way this was going to be meaningful was if we ported the existing code base. The existing code base makes certain assumptions -- specifically, it assumes that there is automatic garbage collection -- and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it's not automatic; you can get reference counting or whatever, but then, in addition to that, there's the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.
- C# Ahead of Time compiler doesn't target all the platforms they want.
- C# Ahead of Time compiler hasn't been stressed in production as many years as Go.
- The core TypeScript compiler doesn't use any classes; Go is functions and data structures, whereas C# is heavily OOP, so they would have to switch paradigms to use C#.
- Go has better control of low level memory layouts.
I'm not involved in the decisions, but don't C# applications have a higher startup time and memory usage? These are important considerations for a compiler like this that needs to start up and run fast in e.g. new CI/CD boxes.
For a daemon like an LSP I reckon C# would've worked.
Yes, in fact that's one of the main reasons given in the two linked interviews: Go can generate "real" native executables for all the platforms they want to support. One of the other reasons is (paraphrasing) that it's easier to port the existing mostly functional JS code to Go than to C#, which has a much more OOP style.
It exists but isn’t the same as a natively compiled binary. A lot gets packed into an AOT binary for it to work. Longer startup times, more memory, etc.
Go’s static binaries are orders of magnitude smaller than .Net’s static binaries. However, you are right, all binaries have some bloat in order to make them executable.
Not when compiled by NativeAOT. It also produces smaller binaries than Go and has better per-dependency scalability (due to metadata compression, pointer-rich section dehydration and stronger reachability analysis). This also means you could use F# for this instead, which is excellent for langdev (provided you don't use printf "%A", which is incompatible with AOT; a small sacrifice).
What is the cross-compilation support for NativeAOT though? This is one of the things where Go shines (as long as you don't use CGO, which seems perfectly plausible in this project), and while I don't think it would be a deal breaker, it probably makes things a lot easier.
What is the state of WASM support in Go though? :)
I doubt the ability to cross-compile TSC would have been a major factor. These artifacts are always produced on dedicated platforms via separate build stages before publishing and sign-off. Indeed, Go is better at native cross-compilation, whereas .NET NativeAOT can only do cross-arch and limited cross-OS by tapping into the Zig toolchain.
Seeing that Hejlsberg started out with Turbo Pascal and Delphi, and that Go also has a lot of Pascal-family heritage, he might hold some sympathy for Go as well...
Yes, there is that irony. However, when these kinds of decisions are made by folks with historical roots in how .NET and C# came to be, the .NET team cannot wonder why .NET keeps lagging in adoption versus other ecosystems at companies that aren't traditional Microsoft shops.
Pure speculation, but C# is not nearly the first class citizen that go binaries are when you look at all possible deployment targets. The “new” Microsoft likely has some built-in bias against “embrace and extend” architectural and business decisions for developers. Overall this doesn’t seem like a hard choice to me.
If you are a rust devotee, you can use https://github.com/FractalFir/rustc_codegen_clr to compile your rust code to the same .NET runtime as C#. The project is still in the works but support is said to be about 95% complete.
I don't understand what Anders' past involvement with C# has to do with this. Would the technical evaluation be different if done by Anders vs someone else?
C# and Go are direct competitors and the advantages of Go that were cited are all features of C# as well, except the lack of top level functions. That's clearly not an actual problem: you can just define a class per file and make every method static, if that's how you like to code. It doesn't require any restructuring of your codebase. There's also no meaningful difference in platform support, .NET AOT supports Win/Mac/Linux on AMD64/ARM i.e. every platform a developer might use.
He clearly knows all this, so the obvious inference is that the decision isn't really about features. The most likely problem is a lack of confidence in the .NET team, or some political problems/bad blood inside Microsoft. Perhaps he's tried to use it and been frustrated by bugs; the comment about "battle hardened" feels like where the actual rationale is hiding. We're not getting the full story here, that's clear enough.
I'm honestly surprised Microsoft's policies allowed this. Normally companies have rules that require dogfooding for exactly this reason. Such a project is not terribly urgent, but it has political heft within Microsoft. They could presumably have gotten the .NET team to fix bugs or make the optimizations they need, at least far more easily than getting the Go team to do it. Yet they chose not to. Who would have any confidence in adopting .NET for performance-sensitive programs now? Even the father of .NET doesn't want to use it. Anyone who wants to challenge a decision to adopt it can just point at Microsoft's own actions as evidence.
Yea, I came here to say the same thing. Anders' reasons for not going with C# all seem either dubious or superficial and easily worked around.
First he mentions the no classes thing. It is hard to see how that would matter even for automated porting, because like you said, he could just use static classes, and even do a static using statement on the calling side.
Another one of his reasons was that Go was good at processing complex graphs, but it is hard to imagine how Go would be better at that than C#. What language feature does Go have, that C# lacks, which supports that? I don't think anyone will be able to demonstrate one. This distinction makes sense for Go vs Rust, but not for Go vs C#.
As for the platform / AOT argument, I don't know as much about that, but I thought it was supposed to be possible now. If it isn't, it seems like it would be better for Microsoft to beef that up than to allow a vote of no confidence to be cast like this.
It is especially jarring given that they are a first-party customer who would have no trouble in getting necessary platforms supported or projects expedited (like NativeAOT-LLVM-WASM) in .NET. And the statements of Anders Hejlsberg himself which contradict the facts about .NET as a platform make this even more unfortunate.
I wonder if there's just some cultural / generational stuff happening there too. The fact that the TS compiler is all about compiling a highly complex OOP/functional hybrid language yet is said to use neither objects nor FP seems rather telling. Hejlsberg is famous for designing object oriented languages (Delphi, C#) but the Delphi compiler itself was written largely in assembly, and the C# compiler was for a very long time written in C++ iirc. It's possible that he just doesn't personally like working in the sort of languages he gets paid to design.
There's an interesting contrast here with Java, where javac was ported to Java from C++ very early on in its lifecycle. And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too. Whereas in the .NET world Roslyn took quite a long time to come along, it wasn't until .NET 6, and of course MS rejected it from Windows more or less entirely for the same sorts of rationales as what Anders provides here.
It was introduced back then with .NET Framework 4.6 (C# 6) - a loong time ago (July 2015). The OSS .NET has started with Roslyn from the very beginning.
> And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too.
NativeAOT uses the same architecture. There is no C++ besides GC and pre-existing compiler back-end (both ILC and RyuJIT drive it during compilation process). Much like GraalVM's Native Image, the VM/host, type system facilities, virtual/interface dispatch and everything else it could possibly need is implemented in C# including the linker (reachability analysis/trimming, kind of like jlink) and optimizations (exact devirtualization, cctor interpreter, etc.).
In the end, it was the TypeScript team members who worked on this port, not Anders Hejlsberg himself, at least as I understand it. So we need to take this into account when judging what is being communicated.
Yes, when the author of the language feels it is unfit for purpose, it is a different marketing message than a random dude on the Internet on his new startup project.
I write a lot of Go and a decent amount of TypeScript. Was there anything during this project that you found particularly helpful/nice in Go, vs. TypeScript? Or was there anything about Go that increased the difficulty or required a change of approach?
I'd be curious to hear about the politics and behind-the-scenes of this project. How did you get buy-in? What were some of the sticking points in getting this project off the ground? When you mention that many other languages were used to spike the new compiler, were there interesting learnings?
I feel like you'll need to provide a wasm binary for browser environments and maybe as a fallback in node itself. Last time I checked, Go really struggles to perform when targeting wasm. This might be the only reason I'd like to see it in Rust but I'm still glad you went with Go.
Honestly, the choice seems fine to me: the vast majority of users are not compiling huge TypeScript projects in the browser. If you're using Vite/ESBuild, you're already using a Go-based JS toolchain, and last I checked Vite was pretty darn popular. I don't suspect there will be a huge burden for things like playground; given the general performance uplift that the Go tsc implementation already gets, it may in fact be faster even after paying the Wasm tax. (And even if it isn't, it should be more than fine for playground anyways.)
I am not a Vite expert, however, when running Vite in dev mode, I can see two things:
- There is an esbuild process running in the background.
- If I look at the JavaScript returned to the browser, it is transpiled without any types present.
So even though the URLs in Vite dev mode look like they're pointing to "raw" TypeScript files, they're actually transpiled JavaScript, just not bundled.
I could be incorrect, of course, but it sure seems to me like Vite is using ESBuild on the Node.JS side and not tsc on the web browser side.
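To illustrate with a hypothetical before/after (exact output details are up to esbuild):

    // Source on disk (math.ts):
    export function add(a: number, b: number): number {
      return a + b;
    }

    // What the dev server actually serves (types stripped, still an
    // unbundled ES module):
    // export function add(a, b) {
    //   return a + b;
    // }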
This is a big concern to me. Could you expand on what work is left to do for the native implementation of tsc? In particular, can you make an argument for why that last bit of work won't reduce these 10x figures we're seeing? I'm worried the marketing got ahead of the engineering.
It's fine; if it's 2x faster after being feature complete, I don't really mind. It's still a free speedup for all existing codebases. Developers won't need to do anything other than install the latest version of TypeScript, I presume.
One thing I'm curious about: What about updating the original Typescript-based compiler to target WASM and/or native code, without needing to run in a Javascript VM?
Was that considered? What would (at a high level) the obstacles be to achieving similar performance to Golang?
Edit: Clarified to show that I indicate updating the original compiler.
It's unlikely that you would get much performance benefit from AOT compiling a TypeScript codebase. (At least not without a ton of manual optimization of the native code, and if you're going to do that, why not just rewrite in a native-first language?)
JavaScript, like other dynamic languages, runs well with a JIT because the runtime can optimize for hotspots and common patterns (e.g. this method's first argument is generally an object with this shape, so write a fast path for that case; see the sketch after this list). In theory you could write an AOT compiler for TypeScript that made some of those inferences at compile time based on type definitions, but
(a) nobody's done that
(b) it still wouldn't be as fast as native, or much faster than JIT
(c) it would be limited - any optimizations would die as soon as you used an inherently dynamic method like JSON.parse()
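To sketch the "shape" point above, here's a hypothetical TypeScript example; the optimization itself happens inside the JS engine, so this is only illustrative:

    // A JIT specializes this call site for one object "shape" (hidden class).
    // Feeding a second shape to the same site forces a slower polymorphic path.
    interface Vec2 { x: number; y: number }

    function lengthSq(v: Vec2): number {
      return v.x * v.x + v.y * v.y;
    }

    let total = 0;
    for (let i = 0; i < 1_000_000; i++) {
      total += lengthSq({ x: i, y: i });             // one shape: stays monomorphic
    }
    total += lengthSq({ x: 1, y: 2, z: 3 } as Vec2); // extra field: new shape, deopts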
So basically, TypeScript as a language doesn't allow compiling to machine code as efficient as Golang's? (Edit) And I assume it's not practical to alter the language in a way that this kind of information can be added (such as adding a typed version of JSON.parse()).
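For what it's worth, something like a "typed" JSON.parse can be approximated in userland today, but only by paying for a runtime check, which is exactly the information an AOT compiler couldn't assume for free. A minimal hand-rolled sketch (validation libraries generalize this):

    // Hypothetical "typed" parse: the static type is only trustworthy
    // because of the runtime validation.
    type User = { id: number; name: string };

    function parseUser(json: string): User {
      const raw: unknown = JSON.parse(json);
      if (
        typeof raw === "object" && raw !== null &&
        typeof (raw as { id?: unknown }).id === "number" &&
        typeof (raw as { name?: unknown }).name === "string"
      ) {
        return raw as User;
      }
      throw new Error("JSON did not match the expected User shape");
    }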
Not sure if it does but the video linked in the post might answer your question? I think he is compiling vscode which includes Monaco editor which is where they are getting 10x faster stat. (I might be wrong here.) [0]
This might be an oddly specific question, but do you think performance improvements like this might eventually lead to features like partial type argument inference in generics? If I recall correctly off the top of my head, performance was one of the main reasons it was never implemented.
What is the forward paths available for efforts like the TS Playground under Typescript 7 (native)?
One of the nice advantages of js is that it can run so many places. Will TypeScript still be able to enjoy that legacy going forward, or is native only what we should expect in 7+?
We anticipate that we will eventually get a playground working on the new native codebase. We know we'll likely compile down to WebAssembly, but a lot of how it gets integrated will depend on what the API looks like. We're currently giving a lot of thought to that, but we have good ideas. https://github.com/microsoft/typescript-go/discussions/455
This is very exciting! I'm curious if this move eventually unlocks features that have been deemed too expensive/slow so far, e.g. typing `ReactElement` more accurately, typing `TemplateStringsArray` etc
Considering Go is the only language with a garbage collector out of the three languages you mentioned, I'm not sure how you reach the conclusion they're all as close to the metal.
C and Rust both have predictable memory behaviour, Go does not.
> When I read the article it was very clear, due to the compiler's in-memory graphs, that they needed a GC.
It's actually pretty easy to do something like this with C, just using something like an arena allocator, or honestly, leaking memory. I actually wrote a little allocator yesterday that just dumps memory into a linked list; it's not very complicated: http://github.com/danieltuveson/dsalloc/
You allocate wherever you want, and when you're done with the big messy memory graph, you throw it all out at once.
There are obviously a lot of other reasons to choose go over C, though (easier to learn, nicer tooling, memory safety, etc).
Go isn't that bad in terms of memory predictability to be honest. It generally has roughly 100% overhead in terms of memory usage compared to no GC. This can be reduced by using GOGC env variable, at the cost of worse performance if not careful.
Really interesting news, and uniquely dismaying to me as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem.
My question has to do with Ryan's statement:
> We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript
I've experimented deeply in this area (maybe 15k hours invested in BABLR so far) and what I've found is that it's richly rewarding. Javascript is fast enough for what is needed, and its ability to cache on immutable data can make it lightning fast not through doing more work faster, but by making it possible to do less work. In other words, change the complexity class not the constant factor.
Is this a direction you investigated? What made you decide to try to move sideways instead of forwards?
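As a rough sketch of what I mean by caching on immutable data (hypothetical names, not BABLR's actual API):

    // Results are keyed on immutable node identity, so an unchanged subtree
    // is analyzed once, ever: the win is doing less work, not faster work.
    interface AstNode { readonly kind: string; readonly children: readonly AstNode[] }

    const analysisCache = new WeakMap<AstNode, string>();

    function analyze(node: AstNode): string {
      const cached = analysisCache.get(node);
      if (cached !== undefined) return cached;   // subtree unchanged: free
      node.children.forEach(analyze);
      const result = node.kind;                  // placeholder "analysis"
      analysisCache.set(node, result);
      return result;
    }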
> as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem
Have you considered the man-years and energy you're making everyone waste? Just as an example, I wonder what the carbon footprint of ESLint has been over the years...
Now, it pales in comparison to Python, but still...
I'm no more thrilled than you at the cost of running ESLint, but using a high-level language doesn't need to mean being wasteful of resources.
TS currently wastes tons of resources (most especially people's time) by not being able to share its data and infrastructure with other tools and ecosystems. There would be much bigger wins from tackling that systemic problem, but then you wouldn't be able to say something as glib as "TS is 10x faster". Only the work that can be distilled to a metric gets done now, because that's how to get a promotion when you work for a company like Microsoft.
Go is an extremely strange choice, given the ecosystem you're targeting. I've got quite a bit of experience in it, TS, Rust and C++. I'd pick any of those for productivity and (in the case of C++ and Rust, thread-safety) over Go, simply because Go's type system is so impoverished.
From a performance perspective, I'd expect C++ and Rust to be much easier targets too, since I've seen quite a few industrial Go services be rewritten in C++/Rust after they fail to meet runtime performance / operability targets.
Wasn't there a recent study from Google that came to the same conclusion? (They see improved productivity for Go with junior programmers that don't understand static typing, but then they can never actually stabilize the resulting codebase.)
Fast dev tools are awesome and I am glad the TS team is thinking deeply about dev experience, as always!
One trade off is if the code for TS is no longer written in TS, that means the core team won’t be dogfooding TS day in and day out anymore, which might hurt devx in the long run. This is one of the failure modes that hurt Flow (written in OCaml), IMO. Curious how the team is thinking about this.
Hey bcherny! Yes, dog-fooding (self-hosting) has definitely been a huge part in making TypeScript's development experience as good as it is. The upside is the breadth of tests and infrastructure we've already put together to watch out for regressions. Still, to supplement this I think we will definitely be leaning a lot on developer feedback and will need to write more TypeScript that may not be in a compiler or language service codebase. :D
Interesting! This sounds like a surprisingly hard problem to me, from what I've seen of other infra teams.
Does that mean more "support rotations" for TS compiler engineers on GitHub? Are there full-stack TS apps that the TS team owns that ownership can be spread around more? Will the TS team do more rotations onto other teams at MSFT?
Ultimately the solution has to be breaking the browser monopoly on JS, via performance parity of WASM or some other route, so that developers can dogfood in performant languages instead across all their tooling, front end, and back end.
First, this thread and article have nothing to do with language and/or application execution performance. It is only about the tsc compiler execution time.
Second, JavaScript already executes quickly. Aside from arithmetic operations it has now reached performance parity with Java, and highly optimized JavaScript (typed arrays and an understanding of how arrays and objects are accessed in memory) can come within 1.5x the execution speed of C++. At this point all the slowness of JavaScript is related to things other than code execution, such as garbage collection, unnecessary framework code bloat, and poorly written code.
That being said, it isn't realistic to expect significantly faster execution times by replacing JavaScript with a WASM runtime. This is even more true considering that many performance problems with JavaScript in the wild are human problems more than technology problems.
Third, WASM has nothing to do with JavaScript, according to its originators and maintainers. WASM was never created to compete with, replace, modify, or influence JavaScript. WASM was created as a language-agnostic Flash replacement in a sandbox. And since WASM executes in its own agnostic sandbox, the cost of replacing an existing runtime is high: a JavaScript runtime is already available, whereas loading a WASM runtime is more akin to installing a desktop application for a first run.
How do you reconcile this view with the fact that the TypeScript team rewrote the compiler in Go and it got 10x faster? Do you think that they could have kept it in TypeScript and achieved similar performance, but they didn't for some reason?
This was touched on in the video a little bit—essentially, the TypeScript codebase has a lot of polymorphic function calls, and so is generally hard to JIT optimize. JS to Go therefore yielded a direct ~3.5x improvement.
The rest of the 10x comes from multi-threading, which wasn't possible to do in a simple way in the JS compiler (efficient multithreading while writing idiomatic code is hard in JS).
JavaScript is very fast for single-threaded programs with monomorphic functions, but in the TypeScript compiler's case, the polymorphic functions and opportunity for parallelization mean that Go is substantially faster while keeping the same overall program structure.
I have no idea about the details of their test cases. If they had used an even faster language like Cobol or Fortran maybe they could have gotten it 1,000,000x faster.
What I do know is that some people complain about long compile times in their code that can last up to 10 minutes. I had a personal application that was greater than 60k lines of code and the tsc compiler would compile it in about 13 seconds on my super old computer. SWC would compile it in about 2.5 seconds. This tells me the far greater opportunity for performance improvement is not in modifying the compiler but in modifying the application instance.
Are you looking for non-browser performance such as 3d? I see no case that another language is going to bring performance to the DOM. You'd have to be rendering straight to canvas/webgl for me to believe any of this.
They should write a typescript-to-go transpiler (in typescript) , so that they can write their compiler in typescript and use typescript to transpile it to go.
The issue with Flow is that it's slow, flaky and has shifted the entire paradigm multiple times making version upgrades nearly impossible without also updating your dependencies, IF your dependencies adopted the new flow version as well. Otherwise you're SOL.
As a result the amount of libraries that ship flow types has absolutely dwindled over the years, and now typescript has completely taken over.
Our experience is the opposite: we have a pretty large Flow-typed code base and can do a full check in <100ms. When we converted to TS (we decided not to merge it), we saw TypeScript take multiple minutes. It's worth checking out LTI and how typing the boundaries enables Flow to parallelize and give very precise error messages compared to TS. The third-party lib support is, however, basically dead, except that the latest versions of Flow are starting to enable ingestion of TS types, so that's interesting.
I notice this time and time again: projects start with a flexible scripting language and a promise that the performance will be sufficient. I mean, JS is pretty performant as scripting languages go and it is hard to think of any language runtimes that get more attention than the browser VMs. And generally, 90% of the things people do will run sufficiently fast in that VM.
Yet projects inevitably get to the stage where a more native representation wins out. I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.
It makes me think I should be starting any project I have in the lowest level representation that allows me some ergonomics. Maybe more reason to lean into Zig? I don't mean for places where something like Rust would be appropriate. I mean for anything I would consider using a "good enough" scripting language.
It honestly has me questioning my default assumption to use JS runtimes on the server (e.g. Node, deno, bun). I mean, the benefit of using the same code on the server/client has rarely if ever been a significant contributor to project maintainability for me. And it isn't that hard these days to spin up a web server with simple routing, database connectivity, etc. in pretty much any language including Zig or Go. And with LLMs and language servers, there is decreasing utility in familiarity with a language to be productive.
It feels like the advantages of scripting languages are being eroded away. If I am planning a career "vibe coding" or prompt engineering my way into the future, I wonder how reasonable it would be to assume I'll be doing it to generate lower level code rather than scripts.
> Yet projects inevitably get to the stage where a more native representation wins out.
I would be careful about extrapolating the performance gains achieved by the Go TypeScript port to non-compiler use cases. A compiler is perhaps the worst use case for a language like JS, because it is both (as Anders Hejlsberg refers to it) an "embarrassingly parallel task" (each source file can be parsed independently) and a task that requires the results of the parsing step to be aggregated and shared across multiple threads (which requires shared-memory multithreading of AST objects). Over half of the performance gains can be attributed to being able to spin up a separate goroutine to parse each source file. Anders explains it perfectly here: https://www.youtube.com/watch?v=ZlGza4oIleY&t=2027s
We might eventually get shared memory multithreading (beyond Array Buffers) in JS via the Structs proposal [1], but that remains to be seen.
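To make the constraint concrete, here's a rough sketch of the JS side (parseSource/addToProgram are placeholders, and this assumes CommonJS output so __filename exists): worker_threads can parallelize the parse, but each AST comes back as a structured-clone copy rather than shared memory.

    // Hypothetical sketch: parallel parsing with worker_threads. Each worker's
    // AST is copied (structured clone) back to the main thread; there is no
    // shared object graph the way goroutines share memory.
    import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";

    declare function parseSource(file: string): object;  // placeholder parser
    declare function addToProgram(ast: object): void;    // placeholder aggregation

    if (isMainThread) {
      for (const file of ["a.ts", "b.ts", "c.ts"]) {
        const worker = new Worker(__filename, { workerData: file });
        worker.on("message", (ast) => addToProgram(ast)); // ast is a copy
      }
    } else {
      parentPort!.postMessage(parseSource(workerData));   // clone at the boundary
    }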
I think the Prisma case is a bit of a red herring. First, they are using WASM, which is itself a low-level representation. Second, the performance gains appear to come primarily from avoiding the marshalling of data from JavaScript into Rust (and back again, I presume). Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.
As for the "compilers are special" reasoning, I don't ascribe to it. I suppose because it implies the opposite: something (other than a compiler) is especially suited to run well in a scripting language. But the former doesn't imply the later in reality and so the case should be made independently. The Prisma case is one: you are already dealing with JavaScript objects so it is wise to stay in JavaScript. The old cases I would choose the scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
> First, they are using WASM which itself is a a low-level representation.
WASM is used to generate the query plan, but query execution now happens entirely within TypeScript, whereas under the previous architecture both steps were handled by Rust. So in a very literal sense some of the Rust code is being rewritten in TypeScript.
> Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.
My point was simply to refute the assertion that once software is written in a low level language, it will never be converted to a higher level language, as if low level languages are necessarily the terminal state for all software, which is what your original comment seemed to be suggesting. This feels like a bit of a "No true Scotsman" argument: https://en.wikipedia.org/wiki/No_true_Scotsman
> As for the "compilers are special" reasoning, I don't ascribe to it.
Compilers (and more specifically lexers and parsers) are special in the sense that they're incredibly well suited for languages with shared memory multithreading. Not every workload fits that profile.
> The old cases I would choose the scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits still apply. In fact, one of the stated reasons for the Prisma rewrite was "skillset barriers". "Contributing to the query engine requires a combination of Rust and TypeScript proficiency, reducing the opportunity for community involvement." [1]
I'm not denying the facts of the matter, I am denying the conclusion. The circumstances of the situation are relevant. Marshalling cost across IPC boundaries comes into play in every single possible situation regardless of language. It is why shared memory architectures exist. It doesn't matter what language is on the other side of the IPC: if the performance gained by using a separate process is not greater than the cost of the communication, then you should avoid the IPC. One way to avoid that cost is to share the memory. In the case of code already running in a JavaScript VM, a very easy way to share the memory is to do the processing in JavaScript.
That is why I am saying your evidence is a red herring. It is a case where a reasonable decision was made to rewrite in JavaScript/TypeScript but it has nothing to do with the merits of the language and everything to do with the environment that the entire system is running in. They even state the Rust code is fast (and undoubtedly faster than the JS version), just not fast enough to justify the IPC cost.
And it in no way applies to the point I am making, where I explicitly question "starting a new project" for example "my default assumption to use JS runtimes on the server". It's closer to a "Well, actually ..." than an attempt to clarify or provide a reasoned response.
The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions. And in the case of "reads data from a OS socket/file-descriptor and writes data to a OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
The fact that many software products are moving to lower-level languages is not a general point in favour of lower-level languages being somehow better—rather, it simply aligns with general directions of software evolution.
1. As products mature, they may find useful scenarios involving runtime environments that don’t necessarily match the ones that were in mind back when the foundation was laid. If relevant parts are rewritten in a lower-level language like C or Rust, it becomes possible to reuse them across environments (in embedded land, in Web via WASM, etc.) without duplicate implementations while mostly preserving or even improving performance and unlocking new use cases and interesting integrations.
2. As products mature, they may find use cases that have drastically different performance requirements. TypeScript was not used for truly massive codebases, until it was, and then performance became a big issue.
Starting a product trying to get all of the above from the get go is rarely a good idea: a product that rots and has little adoption due to feature creep and lack of focus (with resulting bugs and/or slow progress) doesn’t stand a chance against a product that runs slower and in fewer environments but, crucially, 1) is released, 2) makes sound design decisions, and 3) functions sufficiently well for the purposes of its audience. Whether LLMs are involved or not makes no meaningful difference: no matter how good your autocomplete is, the second instance still wins over the first—it still takes less time to reach the usefulness threshold and start gaining adoption.
(And if you are making a religious argument about omniscient entities for which there is no meaningful difference between those two cases, which can instantly develop a bug-free product with infinite flexibility and perfect performance at whatever the level of abstraction required, coming any year, then you should double-check whether if they do arrive anyone would still be using them for this purpose. In a world where I, a hypothetical end user, can get X instantly conjured for me out of thin air by a genie, you, a hypothetical software developer, better have that genie conjure you some money lest your family goes hungry.)
> The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions.
Making technical decisions based on hypothetical technologies that may solve your problems in "a year or so" is a gamble.
> And in the case of "reads data from a OS socket/file-descriptor and writes data to a OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
Arguably Go is a scripting language designed for exactly that purpose.
I wouldn't think choosing a native language over a scripting language is a "gamble" but I suppose that all depends on ability and risk tolerance. I think it would be relatively easy to develop using Rust, Go, Zig, etc.
I would not call Go a scripting language. Go programs are statically linked single binaries, not a textual representation that is loaded into an interpreter or VM. It has more in common with C than Bash. But to make sure we are clear (in case you want to dig in on calling Go a scripting language) I am talking about dynamic programming languages like Python, Ruby, JavaScript, PHP, Perl, etc. which generally do not compile to static binaries and instead load text files into an interpreter/VM. These dynamic scripted languages tend to have performance below static binaries (like Go, Rust, C/C++) and usually below byte code interpreted languages (like C# and Java).
Rather than fixating on this single Prisma example, I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages.
First of all, I would argue that software rewrites are a bad proxy metric for language quality in general. Language rewrites don't measure languages purely on a qualitative scale, but rather on a scale of how likely they are to be misused in the wrong problem domain.
Low level languages tend to have a higher barrier to entry, which as a result means they're less likely to be chosen on a whim during the first iteration of a project. This phenomenon is exhibited not just at the macroscopic level of language choice, but often when determining which data structures and techniques to use within a specific language. I've very seldom found myself accidentally reaching for a Uint8Array or a WeakRef in JS when a normal array or reference would suffice, and then having to rewrite my code, not because those solutions are superior, but because they're so much less ergonomic that I'm only likely to use them when I'm relatively certain they're required.
This results in obvious selection bias. If you were to survey JS developers and ask how often they've rewritten a normal reference in favor of a WeakRef vs the opposite migration, the results would be skewed because the cost of dereferencing WeakRefs is high enough that you're unlikely to use them hastily. The same is true to a certain extent in regards to language choice. Developers are less likely to spend time appeasing Rust's borrow checker when PHP/Ruby/JS would suffice, so if a scripting language is the best choice for the problem at hand, they're less likely to get it wrong during the first iteration and have to suffer through a massive rewrite (and then post about it on HN). I've seen plenty of examples of competent software developers saying they'd choose a scripting language in lieu of Go/Rust/Zig. Here's the founder of Hashicorp (who built his company on Go, and who's currently building a terminal in Zig), saying he'd choose PHP or Rails for a web server in 2025: https://www.youtube.com/watch?v=YQnz7L6x068&t=1821s
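The WeakRef ergonomics point is easy to see in a small sketch: every access has to handle the "already collected" case, unlike a plain reference.

    // Hypothetical cache: each lookup must cope with the referent having
    // been garbage collected in the meantime.
    const cache = new Map<string, WeakRef<object>>();

    function lookup(key: string): object | undefined {
      const value = cache.get(key)?.deref();  // may be undefined after a GC
      if (value === undefined) cache.delete(key);
      return value;
    }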
> your larger point which seems to be that all greenfield projects are necessarily best suited to low level language
That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements. When I say "it makes me think I should ..." I am not saying: "Everyone everywhere should always under any circumstances ...". It is a call to question the assumption, not to make emphatic universal decisions on any possible project that could ever be conceived. That would be a bad faith interpretation of my post. If that is what you are arguing against, consider if you really believe that is what I meant.
So my point stands: I am going to consider this more deeply rather than default assuming that an interpreted scripting language is suitable.
> Low level languages tend to have a higher barrier to entry,
I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
> That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements.
The first comment I wrote in this thread was a response to the following quote: "Yet projects inevitably get to the stage where a more native representation wins out." Inevitable means impossible to evade. That's about as close to a black and white statement as possible. You're also completely ignoring the substance of my argument and focusing on the wording. My point is that language rewrites (like the TS rewrite that sparked this discussion) are a faulty indicator of scripting language quality.
> I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
And I've already said that I disagree with this assertion. I'll just quote myself in case you haven't read through all my comments: "I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits [of scripting languages] still apply." I was under the impression that I didn't have to keep restating my position.
I don't believe that AI has eroded the barriers of entry to the point where the average Ruby or PHP developer will enjoy passing around memory allocators in Zig while writing API endpoints. Neither of us can be 100% certain about what the future holds for AI, but as someone else pointed out, making technical decisions in the present based on AI speculation is a gamble.
Ah, now we're at the dictionary definition level. So let's check Google:
Inevitable:
1. as is certain to happen; unavoidably.
2. (informal) as one would expect; predictably. "Inevitably, the phone started to ring just as we sat down."
Which interpretation of the word is "good faith" considering the rest of my post? If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement? Would you argue with Google and say "I have sat down before and the phone didn't ring"?
It is Hacker News policy and just good internet etiquette to argue with good faith in mind. I find it hard to believe you could have read my entire post and come away with the belief of absolutism.
edit: Just to add to this, your interpretation assumes I think Django (the Python web application framework) will unavoidably be rewritten in a lower level language. And Ruby on Rails will unavoidably be rewritten. Do you believe that is what I was saying? Do you believe that I actually believe that?
I wrote 362 words on why language rewrites are a faulty indicator of language quality with multiple examples and anecdotes, and you hyper-fixated on the very first sentence of my comment, instead of addressing the substance of my argument. In what alternate universe is that a good faith argument? If you were truly arguing in good faith you'd restate your position in whichever way you'd like your argument represented, and then proceed to respond to something besides the first sentence.
> If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement?
If we were having a discussion about automobile safety and you wrote several hundred words about why a specific type of accident isn't indicative of a larger trend, I wouldn't respond by cherry picking the first sentence of your comment, and quoting Google definitions about a phone ringing.
I don't think this speaks to the general reasons someone would rewrite a mid- or low-level project in a high-level language, so much as to the special treatment JS/TS get. Yes, your data model being the default supported one, and everything else in the world having to serialize/deserialize to accommodate that, slows performance. In other words, this is just a reason to use the natively-supported JS/TS, still very much the favorite children of browser engines, over the still sort of hacked-in Rust.
I think it's smart to start with a high level language which should reduce development time, prove the worth of the application, then switch to a lower level language later.
What was that saying again? Premature optimisation is the root of all evil
There's a thread going into what Knuth meant by that quote, which is usually shortened to "premature optimization is the root of all evil". Or, to rephrase it: don't tire yourself out climbing for the high fruit, but do not ignore the low-hanging fruit. But really, I don't even see why "scripting languages" are the particular "high level" languages of choice. Compilers nowadays are good. No one is asking you to drop down to C or C++.
> I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.
Software never gets rewritten in a higher level language, but software is constantly replaced by alternatives. First example that comes to mind is Discord, an Electron app that immediately and permanently killed every other voice client on the market when it launched.
Yes, scripting replacements often usurp existing ossified alternatives. And there is some truth that a higher level language gave some leverage to the developers. That is why I mentioned the advent of LLM based coding assistants and how this may level the playing field.
If we assume that coding assistants continue to improve as they have been and we also assume that they are able to generate lower level code on par with higher level code, then it seems the leverage shifts away from "easy to implement features" languages to "fast in most contexts" languages.
Only time will tell, of course. But I wonder if we will see a new wave of replacements from Electron based apps to LLM assisted native apps.
I game very little these days, but I have run Mumble, Ventrilo and TeamSpeak in the past, and the problem was always the friction in onboarding people onto them: you'd have to exchange host, port and password at best, or worse, explain how to download, install and use them.
Discord can run from a browser, making onboarding super easy. The installable app being in Electron makes for minimal (if any) difference between it and the website.
In summary, running in the web browser helps a lot, and Electron makes it very easy for them to keep the browser version first class.
As an added bonus, they can support Linux, Windows and macOS equally well.
I would say it helps as without Electron, serving all the above with equal feature parity just would have been too expensive or slow and perhaps it just wouldn’t have been as frictionless for all types of new users like it is.
Inevitably? Well, the promise of using something less efficient in terms of performance is that it will be more efficient in terms of development. Many times projects fail because they optimize too early and never built the features they needed to or couldn’t iterate fast enough to prove value and die. So if the native version is better but failed, it’s not so inevitable that it will get to that stage.
Right, which is my point about LLM code assistants. In the past you had two cases: native but slow to add features, so the project eventually dies; or scripted, but with performance bad enough that it eventually needs to be rewritten. (Of course, this is a false dichotomy, but I'm playing into your scenario.)
Now we may have a new case: native but fast to add features using a code assist LLM.
If that new case is a true reflection of the near future (only time will tell) then it makes the case against the scripted solution. If (and only if) you could use a code assist LLM to match the feature efficiency of a scripting language while using a native language, it would seem reasonable to choose that as the starting point.
That’s an interesting idea. It’s amazing how far we’ve come without essentially any objective data on how much these various methodologies (e.g. using a scripting language) improve or worsen development time.
The adoption of AI Code Assistance I am sure will be driven similarly anecdotally, because who has the time or money to actually measure productivity techniques when you can just build a personal set of superstitions that work for you (personally) and sell it? Or put another way, what manager actually would spend money on basic science?
> the lowest level representation that allows me some ergonomics
The ergonomics of compiling your code for every combination of architecture and platform you plan to deploy to? It's not fun. I promise.
> my default assumption to use JS runtimes on the server
AWS Lambda has a minimum billing interval of 1ms. To do anything interesting you have to call other APIs which usually have a minimum latency of 5 to 30ms. You aren't buying much of anything in any scalable environment.
> there is decreasing utility in familiarity with a language to be productive.
I hope you aren't planning on making money from this code. Either way, have fun debugging that!
> the advantages of scripting languages are being eroded away.
As long as scripting languages have interfaces which let them access C libraries, either directly or through compiled modules, they will have strong advantages. Just having a CLI where you can test out ideas and check performance is massively powerful, and I hate not having it in any compiled project. Go has particularly bad ergonomics here: writing test cases is easy, but exploring ideas is not, due to its strictness down to even the code-styling level.
The JS `tsc` type checks the entire 1.5-million-line VS Code source in 77s (non-incremental). 7s is a lot better and will certainly improve DX - which is their goal - but I don't see how that's "insufficient".
The trade-off is that the team will have to start dealing with a lot of separate issues... How do tools like ESLint TS talk to TSC now? How to run this in playground? How to distribute the binaries? And they also lose out on the TS type system, which makes their Go version rely a little more on developer prowess.
This is an easy choice for one of the most fundamental tools underlaying a whole ecosystem, maintained by Microsoft and one of the developers of C# itself, full-time.
Other businesses probably want to focus on actually making money by leading their domain and easing long-term maintenance.
Hejlsberg seemed quite negative when it came to cross platform AOT compiled C# in several comments he's made, hinting at problems with both performance and maturity on certain platforms.
This was also surprising to me – C# is a really awesome and modern language.
I happened to be doing a lot of C# and .NET dev when all this transition was happening, and it was very cool to be able to run .NET in Linux. C# is a powerful language with great and constantly evolving ideas in it.
But then all the stuff between the runtimes, API surfaces, Core vs Framework, etc all got extremely confusing and off-putting. It was necessary to bring all these ecosystems together, but I wonder if that kept people away for a bit? Not sure.
If I recall an article from a while back, the idea was originally Rust, but the current compiler design had lots of shared references that would make a port to Rust a lot of work.
Personally, Rust only makes sense in scenarios where automatic memory management of any kind is either unwanted, or where convincing the target group otherwise is a quixotic battle.
OS kernels, firmware, GPGPU,....
If it is the ML-inspired type system you're after, there are plenty of options among compiled managed languages. True, Go isn't really in that camp, but whatever.
I'd love a language that is GC'd like Go, but with an ML-inspired type system, and still imperative. OCaml seems to be the closest thing to Rust in that regard, but it's not imperative.
Nim is pretty close to that for me. Its heritage is more Pascal-ish, but it has a sophisticated type system, including case types similar to ML sum types, plus compile-time features.
Or possibly you want to use a language you're familiar with in adjacent spaces (eg tools) or you want to tackle concurrency bugs more directly. There is more to rust than it's
He also mentioned doing a line-for-line port. Assuming you could somehow manage that in Rust, you'd probably end up with something slower than JS (not entirely a joke). I'm a Rust fanboy, but I have to concede that Go was the best choice here.
If it was a fresh compiler then the choice would be more difficult.
>Pity that they didn't go with AOT compiled .NET, though.
I was trying to push .NET as our possible language for high-performance executables. Seeing this means I'll stop trying to advocate for it, if even this team doesn't believe in it.
I didn't say it was very performance critical; Go and C# are both good enough for us in this regard. The problem is that, when evaluating the whole thing, they decided against C#. That is what is problematic here.
But they never stated it is *because* of C#'s performance, so I don't think this is THAT problematic.
But I agree that it would be nice to see them dogfooding their own language on such a massive project, and one related to TypeScript at that (C# inspired some of its features). It is a shame they don't, but that is also the case for many of their projects (they are even pushing React Native for apps nowadays), so I think at some level it's really fine.
> But they never stated it is *because* of C#'s performance
But I just said my point is not about performance at all! It is about the whole package. The performance of C# and Go is enough for my use case, same for Java and C obviously. They just told us that they don't think the whole package makes sense, and disowned AOT compilation.
But you said:
> I was trying ot push .net as our possible language for somehow high performance executables. Seeing this means I'll stop trying to advocate for it. If even this team doesn't believe in it.
Which made me naturally think your point was, indeed, about performance.
Although, as it appears, I was wrong, so fair enough.
There are some external projects that have tried to port tsc to native. stc[0], for instance, was one. Iirc it started out in Go since it had a more comparable type system (they both use duck typing) making it easier to do one-to-one conversions of code from one language to the other. I’m not totally sure why it ended up pivoting to rust.
> I love that they picked Go instead of the fashion to go Rust
This seems super petty to me. Like, if at the end of the day you get a binary that works on your OS and doesn’t require a runtime, why should you “love” that they picked one language over another? It’s exactly the same outcome for you as a user.
I mean, if you wanted to contribute to the project and you knew go better than rust, that would make sense. But sounds like you just don’t like rust because of… reasons, and you’re just glad to see rust “fail” for their use case.
It's not just a pity, it's very surprising. In my eyes Go is a direct competitor of C#: whenever you pick Go for a project, C# should have been a serious consideration. Hejlsberg designed C#, and it is astounding that a team in which he's an authority figure would opt for Go, a language which frankly I would not consider for building a compiler.
Not saying that in a judgemental way, I'm just genuinely surprised. What does this say about what Hejlsberg thinks of C# at the moment? I would assume one reason they don't pick C# is because it's deeply unpopular in the open source world. If Microsoft was so successful in making Typescript popular for open source work, why can't they do it for C#?
I have not opted to use C# for anything significant in the past decade or so. I am not 100% sure why, but there's always been something I'd rather use. Whether that's Go, Rust, Ruby or Haskell. I always enjoyed working in C#, I think it's a well designed and powerful language even if it never made the top of my list recently. I never considered that there might be something so fundamentally wrong with it that not even Hejlsberg himself would use it to build a Typescript compiler.
- C# is bytecode-first, Go targets native code. While C# does have AOT capabilities nowadays this is not as mature as Go's and not all platforms support it. Go also has somewhat better control over data layout. They wanted to get as low-level as possible while still having garbage collection.
- This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style. This suits Go well while a C# port would have required more restructuring.
This is a shockingly out-of-date statement by Anders.
I'm not sure what's going on. I guess he's just not involved with the runtime side of .NET at all, so he doesn't actually know where the capability sits circa 2024/2025. But really, it's a terrible situation to be in, especially given how much worse langdev UX is in Go compared to C#, F# or Rust. No one would've batted an eye if either of those had been used.
Only Android is missing from that list (marked as "Experimental"). We could argue about maturity but this is a bit subjective.
> Go also has somewhat better control over data layout
How? C# supports structs, ref structs (stack allocated only structures), explicit stack allocation (`stackalloc`), explicit struct field layouts through annotations, control over method local variable initialization, control over inlining, etc. Hell, C# even supports a somewhat limited version of borrow checking through the `scoped` keyword.
> This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style.
C# has been consistently moving into that direction by taking more and more inspiration from F#.
The only plausible reason would be extensive usage of structural typing, which is present in TS and Go but not in C#.
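A quick sketch of the difference, since TypeScript makes it easy to see (Go's interfaces behave similarly at the structural level):

    // Structural typing: anything with the right shape qualifies, with no
    // explicit "implements" declaration (a nominal language like C# would
    // require one).
    interface Named { name: string }

    function greet(n: Named): string {
      return `hello, ${n.name}`;
    }

    const point = { name: "origin", x: 0, y: 0 };
    greet(point); // OK: point is structurally a Named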
Chances are it was just the personal preference of the team, and decades of arguing about language design have worn out Anders Hejlsberg. I don't think structural typing alone is enough of an argument to justify the choice over Rust. Maybe the TS team thought choosing Go would have better optics. Well, they won't have it both ways, because this decision is in my opinion clearly short-sighted, and as someone aptly pointed out on Twitter, they will now be beholden to Google's control over Go should they ever need the compiler to support a new platform or evolve in a particular way. Something they would've gotten easily with .NET.
On the topic of preference, this thread has really shown me that there is a HUGE preference for a native-aot gc language that is _not_ Go. People want AOT because of the startup and memory characteristics, but do not want to sacrifice language ergonomics. C# could fill that gap if Microsoft would push it there.
Doubt is human, but it isn't always warranted. In C++ you can use a concurrent, completely pause-free garbage collector, where the programmer decides which data is managed by the GC. This enables code optimizations in ways that aren't possible in C# and Java.
You realize that is literally not the same thing? I said equivalent code. The whole reason of using a managed language with GC is to not think about those things because they eat up thought and development time. Of course the language that will let you hand optimize every little line will eventually be more performant. I really think you’re discounting both C#’s ability to do those things and just how good Java’s GCs are. Anyway, thats not the point.
The point is C++ sucks dude. There is no way that you can reasonably think that bolting a GC on to C++ is going to be a pleasurable experience. This whole conversation started with _language ergonomics_. I don’t care that it’ll save 0.5 milliseconds. I’d rather dig holes than write C++.
Isn't the AOT story for F# pretty meh? AOT + System.Text.Json requires source generation as best I can tell, which F# doesn't support yet (to my knowledge).
In complex projects like this, Go requires manual scripting and build-time code generation. Arguably, writing a small shim project in C# is much easier. You don't exactly do a lot of JSON serialization in a compiler either way. Other than that - F# "just works" and does not require anything extra. It is just IL after all.
NativeAOT story itself is also interesting - I noted it in a sibling comment but .NET has much better base binary size and binary size scalability through stronger reachability analysis, metadata compression and pointer-rich binary sections dehydration at a small startup cost (it's still in the same ballpark). The compiler output is also better and so is whole program view driven devirtualization, something Go does not have. In the last 4 years, .NET's performance has improved more than Go's in the last 8. It is really good at text processing at both low and high level (only losing to Rust).
The most important part here is that TypeScript at Microsoft is a "first-party" customer. This means if they need additional compiler accommodations to improve their project experience from .NET, they could just raise it and they will be treated with priority.
This decision is technically and politically unsound at multiple levels at once. For example, they will need good WASM support. .NET's existing WASM support is considered "decent" and even that one is far from stellar, yet considered ahead of the Go one. All they needed was to allocate additional funding for the ongoing already working NativeAOT-LLVM-WASM prototype to very quickly get the full support of the target they needed. But alas.
I already hinted on BlueSky that they shouldn't wonder why .NET has adoption problems outside the traditional Windows ecosystem, when decisions like these are taken.
C# has become a poor jack of all trades, trying to be Java, Go and F# at the same time, and actually being a shoddy version of all of them. On top of that, .NET has become very enterprisey bloatware. In all honesty, I'm not surprised that they went with Go, as it has a clear identity and a clear use case which it caters to extremely well, and it doesn't lose focus by trying to be too many other unrelated things at the same time.
Maybe it's time to stop eating everything that Microsoft sales folks/evangelists spoon-feed you, and wake up to the fact that people paid by Microsoft to bang the drum about Microsoft products, telling you that .NET and C# are oh so good and the best at everything, are maybe not actually that credible.
Look at the hard facts. Every single product which Microsoft has built that actually matters (e.g. all their Azure CNCF stuff, Dapr, now this) is using non Microsoft languages and technologies.
You won't see Blazor being used by Microsoft or the 73rd reinvention of ASP.NET Core MVC Minimal APIs Razor Pages Hocus Pocus WCF XAML Enterprise (TM) for anything mission critical.
If not for Microsoft's backing, C# would have died a long time ago. It's just another D, but with a lot more money behind it. It had its chance/momentum, but it failed, and its time has passed. Resurrecting the language now would be very difficult.
It seems to be because AOT plays second fiddle in the dotnet ecosystem, while native is a top priority for their use case. After hearing the reasoning ( https://youtu.be/ZlGza4oIleY?si=1GKSX61AF20VQr-G&t=1000 ) I don't blame them for choosing Go.
C# needs the .NET runtime, while Go compiles down to a self-contained binary. And the Go toolchain lets you cross-compile for other architectures fairly easily.
.NET has AOT compilation now. There really is no excuse, especially when you consider that C# has a pretty decent type system and Go has an ad-hoc, informally specified, bug-ridden, slow implementation of half of a decent type system.
If you are wondering why not Rust instead of Go, they outline why Rust was not chosen. This is a port, not a reimplementation. Many of the data structures cannot easily be ported to Rust, such as nodes with cyclic references. Check the longer interview here: https://www.youtube.com/watch?v=10qowKUW82U&ab_channel=Michi...
Also, I think the discussion of esbuild's choice of language largely applies here as well, as the situations are quite similar. You can find it here on HN.
> By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
I haven't looked at the tsc codebase. I do currently use Golang at my job and have used TypeScript at a previous job several years ago.
I'm surprised to hear that idiomatic Golang resembles the existing coding patterns of the tsc codebase. I've never felt that idiomatic code in Golang resembled idiomatic code in TypeScript. Notably, sum types are commonly called out as something especially useful in writing compilers, and when I've wanted them in Golang I've struggled to replace them.
Is there something special about the existing tsc codebase, or is the statement about idiomatic Golang resembling the existing codebase something you could say about most TypeScript codebases?
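The closest I've gotten in Go is a sealed interface plus a type switch. A minimal sketch of what I mean, with made-up AST shapes (not tsc's actual ones):

    package main

    import "fmt"

    // Sealed interface: only types with the unexported marker method
    // can be Nodes, which approximates a closed sum type.
    type Node interface{ isNode() }

    type NumLit struct{ Value float64 }
    type BinExpr struct {
        Op          string
        Left, Right Node
    }

    func (NumLit) isNode()  {}
    func (BinExpr) isNode() {}

    func eval(n Node) float64 {
        switch n := n.(type) {
        case NumLit:
            return n.Value
        case BinExpr:
            l, r := eval(n.Left), eval(n.Right)
            if n.Op == "+" {
                return l + r
            }
            return l * r
        default:
            // Unlike a checked sum type, exhaustiveness isn't enforced:
            // a forgotten case is a runtime panic, not a compile error.
            panic(fmt.Sprintf("unhandled node %T", n))
        }
    }

    func main() {
        fmt.Println(eval(BinExpr{Op: "+", Left: NumLit{1}, Right: NumLit{2}})) // 3
    }

It works, but the exhaustiveness checking is exactly what you give up relative to a discriminated union in TypeScript.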
> I'm surprised to hear that idiomatic Golang resembles the existing coding patterns of the tsc codebase. I've never felt that idiomatic code in Golang resembled idiomatic code in TypeScript.
To be fair, they didn't actually say that. What they said was that idiomatic Go resembles their existing patterns. I'd imagine what they mean by that is that a port from their existing patterns to Go is much closer to a mechanical 1:1 process than a port to Rust or C#. Rust is the obvious choice for a fully greenfield implementation, but reorganizing around idiomatic Rust patterns would be much harder for most programs that are not already written in a compatible style. e.g. For Rust programs, the precise ownership and transfer of memory needs to be modelled, whereas Go and JS are both GC'd and don't require this.
For a codebase that relies heavily on exception handling, I can imagine a 1:1 port would require more thought, but compilers generally need pretty good error recovery, so I wouldn't be surprised if tsc has bespoke error-handling patterns that defer error handling and pass errors around as values a lot; that would map pretty well to Go.
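Something like this shape, say; a sketch with illustrative names, not the actual compiler API:

    package main

    import "fmt"

    // Diagnostics accumulated as plain values rather than thrown:
    // checking continues past the first error, and the caller decides
    // what to do with the list.
    type Diagnostic struct {
        Pos int
        Msg string
    }

    func checkFile(src string) []Diagnostic {
        var diags []Diagnostic
        if len(src) == 0 {
            diags = append(diags, Diagnostic{Pos: 0, Msg: "file is empty"})
        }
        // ... a real checker would keep appending as it walks the AST ...
        return diags
    }

    func main() {
        for _, d := range checkFile("") {
            fmt.Printf("%d: %s\n", d.Pos, d.Msg)
        }
    }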
Most TypeScript projects are very far away from compiler code, so that this wouldn't resemble typical TypeScript isn't too surprising. Compilers written in Go also don't tend to resemble typical Go either, in fairness.
I'm not involved in this rewrite, but I made some minor contributions a few years ago.
TSC doesn't use many union types; it's mostly OOP-ish down-casting or chains of if-statements.
One reason for this, I think, is performance: most objects are tagged with bitsets in order to pack more info about the object without needing additional allocations. But TypeScript can't really (ergonomically) represent this in the type system, so you don't get any really useful unions.
A lot of the objects are also secretly mutable (for caching/performance) which can make precise union types not very useful, since they can be easily invalidated by those mutations.
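For what it's worth, that style of bit packing ports almost mechanically to Go. A sketch with invented names, not tsc's actual flag definitions:

    package main

    import "fmt"

    // Many booleans packed into one word: no extra allocation,
    // cheap to copy, cheap to test with a mask.
    type NodeFlags uint32

    const (
        FlagExported NodeFlags = 1 << iota
        FlagAsync
        FlagOptional
    )

    func main() {
        flags := FlagExported | FlagAsync
        fmt.Println(flags&FlagAsync != 0)    // true
        fmt.Println(flags&FlagOptional != 0) // false
    }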
We had Daniel and Anders on the podcast to talk about the how and why of the native port if anyone is looking for an in-depth discussion → https://www.youtube.com/watch?v=ZlGza4oIleY
I'm really surprised by this visceral reaction to not choosing Rust. Go is a great language, and I'd choose it over Rust for the majority of projects just based on the simplicity of the language and the ability to spin up developers on it quickly. Microsoft is a big corporation.
Because of its truly primitive type system, and because Microsoft already has a much better language — C#, which is both faster and can be more high level and more low-level at the same time, depending on your needs.
I am a complete nobody to argue with the likes of Hejlsberg, but it feels like the AOT performance problems could have been solved if tsc needed it, and tsc adopting C# would also have helped push C#/.NET adoption. Once again, Microsoft proves that it's a bunch of unrelated companies at odds with each other.
This is not "the main reason", lol; it was never stated as such. The type system could be way more powerful, and with the same general features they would probably still have picked it.
What realistic contender doesn't have all the same general features as Go? It doesn't exactly have many to choose from, none of them particularly esoteric, and most of them bare necessities required of any language.
Let's be real: You can absolutely write "Go-style" code in just about any language that might have been considered for this. But you wouldn't want to, as a more advanced type system enables entirely different idioms, and it is in bad faith to other developers (including future you) to stray too far from those idioms. If ignoring idioms doesn't sound like a bad idea on day one, you'll feel the hurt and regret soon enough...
Go was chosen because the idioms are generally in alignment with their needs and those idioms are wholly dependent on the shape of its type system.
It was stated from the angle of wanting to ship software sometime this century.
But there is probably some truth in what you say as well. Footguns are no doubt refreshing after being engrossed in TypeScript (and C#) for decades. At some point you start to notice that your tests end up covering all the same cases as your advanced types, and you begin to question why you are putting in so much work repeating yourself, which ultimately makes you want to look for something better.
Which, I suppose, is why industry itself keeps ending up taking that to the extreme, cycling between static and dynamic typing over and over again.
> At some point you start to notice that your tests end up covering all the same cases as your advanced types
I don't think this is fair [at all]. You use types precisely so you don't need to be so overreliant on tests: they either tell some objective truths about your code at compile time (thus reducing the natural need for certain tests) or your type system is simply useless.
Either way, I don't think the "industry" is a person balancing itself on a pendulum. There are more things under the sun than we can count, and millions of individuals in their everyday projects may not reason about things this way, instead choosing "well, person X said this language is more maintainable and readable, and I trust X, so I'll use it" (which is a rational thing to do, to some extent).
> I don't think this is fair [at all], you use the types precisely to not need to be so overreliable on tests
At the extreme end of the spectrum that starts to become true. But the languages that fill that space are also unusable beyond very narrow tasks. This truth is not particularly relevant to what is seen in practice.
In the realm of languages people actually use on a normal basis, with their half-assed type systems, a few more advanced concepts sprinkled in here and there really don't do much to reduce the need for testing: you still have to test around all the many other holes in the type system, and those tests end up incidentally covering the advanced-type cases as well.
In practice, the primary benefit of the type system in these real-world languages is as it relates to things like refactoring. That is incredibly powerful and not overlapped by tests. However, the returns are diminishing. As you get into increasingly advanced type concepts, there is less need/ability to refactor on those touch points.
Most seem to agree that a complete type system is way too much (especially for general purpose programming), and no type system is too little; that a half-assed type system is the right balance. However, exactly how much half-assery is the right amount of half-assery is where the debate begins. I posit that those who go in deep with thinking less half-assery is the way eventually come to appreciate more half-assery.
Hmmm, I think this is an interesting discussion. There are many sides I need to respond to here; maybe I won't be able to cover everything, but here I go.
See, I fundamentally disagree that those languages are "unusable beyond very narrow tasks", because I never claimed that only a complete, absolutely proven type system can provide those proofs. Even a relatively mid-tier (a bit above average) type system like C#'s already provides enormous benefits in this regard. When you test something like raw JavaScript, you end up testing things as basic as the shape of your objects; in C# you don't have to, because the type system dictates the shape. You also have to be very careful around possibly-null objects and values, a problem that a language with "proper" nullable types (supported by the type system and static checkers), like C#, reduces vastly (if you use the feature, naturally).

C# also "brings the types into runtime" through reflection, so you won't see shape-asserting libraries like 'zod' or 'pydantic' in C# or other mid-tier typed languages; those checks only need testing when developing such a library, not in your own code. C#'s type system likewise proves many things about the safety of your code: you basically never need to test your usage of Spans, because the type system and static analysis already rule out most problematic usages. You also never need to test whether your int is actually a float because some random place in your code set it so (as in JS), nor against many other basic assumptions that even an extremely basic type system (even Go's) would give you.
This is to say that, basically, your claim doesn't hold for relatively simple type systems. I'm also yet to see it hold for more advanced ones. Rust, for example, is a reasonably widely used language for low-level projects, and I've never seen anyone test (well-bounded, safe) Rust code for the basic shapes of types, nor for the conclusions the type system provides: testing whether the type system really caught an ownership transfer, or whether it is really safe to assume there is only one mutable reference to an object after calling a method, or whether a destructor really runs at the end of a function's scope, or whether an overly complex associated-type result was actually what you meant (in fact, if you ever use such complicated types, it is precisely to get compile-time guarantees that a test could not cover entirely, and that you would never write unit tests for in the first place).

So I don't think it is true that you need a powerful type system to see the reduction in tests relative to a completely dynamically typed language, nor do I think that once you have really powerful type constructs you "start to notice that your tests end up covering all the same cases as your advanced types". Nor do you need to go to the extreme of the spectrum to see these benefits; they appear gradually and grow as you move toward that end (where you find genuinely uncommon things like dependent types, refinement types, or effect systems).
I also don't really agree that it matters what "most people" think about powerful type systems and the languages using them; it matters more that the right people use them, the people who want the benefits, than the everyday masses (though this is another overly complex discussion).
And while I can understand the feelings you have towards the "low end of half-assery type systems", and even agree to a reasonable degree (naturally, with my own considerations), I don't think glorifying mediocre type systems is the way to go (as many people do, for some terrifying reason). It is enough to recognize that a half-assed type system usually gets the job done, and that's completely fine and okay (it may even be faster to write with), instead of arguing that we should "pursue primitive type systems" because it is possible to do things well in them. Maybe I'm digressing too much; it's hard to respond to this comment in a satisfactory manner.
>> I don't think the "industry" is a person
> Nobody does.
Yeah, this was not a very productive point of mine, sorry.
> I fundamentally disagree that those languages are "unusable beyond very narrow tasks"
Then why do you think nobody uses them (outside of certain narrow tasks)? It is hard to deny the results.
The reality is that they are intractable. For the vast majority of programming problems, testing is good enough and far, far more practical. There is a very good reason why the languages people normally use (yes, including C# and Rust) prefer testing over types.
> See, when you test for something like raw JavaScript, you end up testing things that are even about the shape of your objects
Incidentally, but not explicitly. You also end up incidentally testing things like the shape even in languages that provide strict guarantees in the type system. That's the nature of testing.
I do agree that testing is not well understood by a lot of developers. There are certainly developers who think that explicitly testing for, say, the shape of data is a test that needs to be written. A lot of developers straight up don't know what makes for a useful test. We'd do well to help them better understand testing, but I'm not sure "don't even think about it, you've got a half-assed type system to lean on!" gets us there. Quite the opposite.
> it matters more that the right people are using them
Well, they're not. And they are not going to without some fundamental breakthrough that changes the tractability of using languages with an advanced (on the full spectrum, not relative to Go) type system. The tradeoffs just aren't worth it in nearly every case. So we're stuck with half-assed type systems and relying on testing, for better or worse. Yes, that includes C# and Rust.
> I don't think glorifying mediocre type systems is the way to go (like many people usually do, for some terrifying reason).
Does it matter? Engineers don't make decisions based on some random emotional plea on HN. A keyboard cowboy might be swayed in the wrong direction by such, but then this boils down to being effectively equivalent to "If we don't talk about sex maybe teenage pregnancy will cease." Is that really the angle you want to go with?
Overly expressive type systems have way more potential for footguns than simple type systems. In fact, I would say that overly expressive type systems make it easy to create unmaintainable code (still waiting on this showstopping bug which nobody can debug because it uses overly expressive types in TS: https://github.com/openapi-ts/openapi-typescript/issues/1769)
I don't think TypeScript is an example of what people would call a "properly expressive type system". Sure, it is very expressive, but it is made to cover all the gaps JavaScript has as a language in a generally type-safe manner, and that calls for an EXTREMELY complex and open type system, much more so than most languages would ever have, so I don't think it really works as an example.
The gap between maintainable and unmaintainable code sits between the chair and the screen, not in the type system of the language; the language merely makes a person more or less able to encode things in specific places that can become unmaintainable (and anecdotally, most unmaintainable code I know of doesn't even use complex type system features; it's just plain old messy state being mutated all over the place).
I am not sure we're seeing the same thread. There is one reaction from a "Rust" dev (who seems to have a very new GitHub account) on why not Rust. Most of the others seem to be from the C# side.
The pattern seems to be the same in the Reddit thread: there is one post asking why not Rust, and an equal (or greater, depending on how you weigh it) amount of commentary on how other people are reacting to this news.
What is weird is how much people talk about how other people react. Modern social media is weird
There are at least 3 top-level threads criticizing the decision not to rewrite in Rust, including a RIR banner ad posted in the replies.
Holy Language Wars are a spectator sport as old as the internet itself. It's normal to comment on one side fighting another. What's weird is pretending not to see the fighting
After years of PHP, I came to typescript nearly 4 years ago (for web front and backend development). All I can say is that I really enjoy using this programming language. The type system is just about enough to be helpful, and not too much to be in your way.
Compiling the codebase is quite fast compared to other languages. With a 10x speedup, it will be so much more fun to code.
Never been a big fan of MS, but must say that typescript is well done imho. thanks for it and all the hard work!
Meanwhile .NET developers are still waiting for Microsoft to use their own "inventions" like Blazor, .NET MAUI, Aspire, etc. for anything meaningful. Bless them.
I know this is a port but I really hope the team builds in performance debugging tools from the outset. Being able to understand _why_ a build or typecheck is taking so long is sorely missing from today's Typescript.
Yes, 100% agree. We've spent so much time chasing down what makes our build slow. Obviously that is less important now, but hopefully they've laid the foundation for when our code base grows another 10x.
My initial interpretation of the title was that the TS team was adding support for another, faster, target such as the .NET runtime or native executables. The title could use some editing.
Sounds like they're automatically generating some amount of the Go code from the TS [0]. I wonder if they will open up the transpilation effort; that way you'd create a path for other TypeScript projects to generate fast native binaries.
The automatic generation was mainly a step to help with manual porting, since it requires so much vetting and updating for differences in data layout; effectively all of the checker code Anders ported himself!
hi! author of the Doom thing, here. while I won't be the one to try, my answer is "absolutely yes, it will make a massive difference". Sub-1-day Doom-first-frame is probably a possibility now, if not much more because actually the thing that was the largest bottleneck for Doom-in-TypeScript-types was serializing the type to a string, which may well be considerably more than 10x faster. Hopefully someone will try some day!
Yea, sounds like cross platform AOT compiled C# not being mature and performant was a big reason that C# was rejected.
One other thing I forgot to mention: he talked about how the current compiler is written mostly as more or less pure functions operating on data structures, as opposed to being object-oriented, and that this fits very well with the Go way of doing things, making a 1:1 port much easier.
Immaturity of native AOT sounds like a likely culprit here. If they're after very fast startup times running classic C# is out. And native AOT is still pretty new.
You can write pure functions operating on data structures in C#, it's maybe not as idiomatic as in Go, but it should not cause problems.
I don't really get the OOP arguments from Anders. You don't need to do OOP stuff in C# - just write a bunch of static functions if you want. However, I totally get the AOT aspect. Creating a simple cli app meant for wide distribution in .NET isn't great because you either have to ship the runtime or try to use AOT which is very much a step out. I have come to the same conclusion and used Go on some occasions for the same reason despite not knowing it very well.
If doing a web server, on the other hand, these things wouldn't matter at all as you would be running a container anyway.
Same reason I hate gradle/maven/ant: shipping a big runtime that many devs won't have installed, just for a build tool, is bad. Even with AOT, you still need a dotnet runtime.
Given the direction and efforts into projects like rspack, rolldown, etc. Why were they not considered as possible collaboration projects or integrations for this?
This isn't a knock against Go or necessarily a promotion of Rust, just seems like a lot of duplicated effort. I don't know the timelines in place or where the community projects were vs. the internal MS project.
So the end goal is that I can write a typescript application and deploy an executable to my server? Or is it just to deliver faster versions of typescript tools and MS developed typescript applications?
This is amazing. Everyone that picked TS for a big project was effectively betting that someone would do this at some point, so it's incredible to see it finally happen. Thanks to everyone involved!
Typescript compiles to javascript, so does this not prove what people have been screaming from the rooftops for so long that there's a significant performance penalty with typescript for almost no actual benefit?
> a significant performance penalty with typescript
There's a significant performance penalty for using javascript outside the browser.
I'm not aware of any JS runtime outside a browser that supports shared-memory multithreading (as opposed to concurrently awaiting IO), so you can't do parallel compilation in a single process.
It's generally also very difficult to make a JS program as fast as even a naive Go program, and the performance tooling for Go is dramatically more mature.
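To make that concrete, here's the kind of in-process fan-out Go hands you for free; checkFile is a stand-in, not the real compiler API:

    package main

    import (
        "fmt"
        "sync"
    )

    func checkFile(name string) string {
        // Stand-in for real per-file work sharing one in-memory program.
        return "checked " + name
    }

    func main() {
        files := []string{"a.ts", "b.ts", "c.ts"}
        results := make([]string, len(files))
        var wg sync.WaitGroup
        for i, f := range files {
            wg.Add(1)
            go func(i int, f string) { // args passed explicitly; safe even before Go 1.22
                defer wg.Done()
                results[i] = checkFile(f) // goroutines share the results slice directly
            }(i, f)
        }
        wg.Wait()
        fmt.Println(results)
    }

All the goroutines write into one shared slice with no copying or message serialization, which is the part a WebWorker-based setup struggles with (workers communicate by structured clone, give or take SharedArrayBuffer).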
You seem to be referring to runtime performance of compiled code. The announcement is about compile times; it's about the performance of the compiler itself.
One question that springs to mind is the in-browser "playground" and hosted coding use-case. I assume WASM will be used in that scenario. I'm wondering what the overhead is there.
Wow, this is huge! A 10x speedup is going to be game-changing for large TypeScript codebases like ours.
I've been waiting for something like this - my team's project takes forever to typecheck on CI and slows down our IDE.
Hopefully this also reduces the memory footprint, because my VS Code IntelliSense keeps crashing unless I give it like 70% of my RAM; it's probably because of our fairly large graphql.ts file, which contains auto-generated GraphQL types.
It’s not obvious from the text, but the compiler was previously written in TypeScript (which was kind of a strange choice for the language to write a compiler in).
“Nice” doesn’t mean “well suitable for writing a compiler in”. It’s strange to think that all languages should be equally good for writing all kinds of things, and choosing a web language for a non-web task is doubly strange.
It's not strange, it's very common. It's called "bootstrapping".
> Bootstrapping is a fairly common practice when creating a programming language. Many compilers for many programming languages are bootstrapped, including compilers for ALGOL, BASIC, C, C#, Common Lisp, D, Eiffel, Elixir, Go, Haskell, Java, Modula-2, Nim, Oberon, OCaml, Pascal, PL/I, Python, Rust, Scala, Scheme, TypeScript, Vala, Zig and more.
It is fairly common, yes. Sometimes those compilers (or interpreters) aren't the primary implementation, but it's certainly a thing that happens often.
Most of the Rust compiler is in Rust, that's correct, but it does by default use LLVM to do code generation, which is in C++.
Some pattern matching occurs in the function match-case-to-casequal. This is why it is preceded by a dummy implementation of non-triv-pat-p, a function needed by the pattern-matching logic for classifying whether a pattern is trivial or not; it has to be defined so that the if-match and other macros in the following function can expand. The stub just says every pattern is nontrivial, a conservative guess.
non-triv-pat-p is later redefined. And it uses match-case! So the pattern matcher has bootstrapped this function: a fundamental pattern-classification function in the pattern matcher is written using pattern matching. Because of the way the file is staged, with the stub initial implementation of that function, this is all bootstrapped in a single pass.
Few things are more Microsofty than a team reaching for a competitor's language instead of their own, and to boot, none of the reasons given so far seem credible. Good job to the team nonetheless.
Totally agree about the reasons; they have some hidden agenda behind this decision that they don't want to disclose. Rewriting in native code allows a step-by-step rewrite using the JS runtime with native extensions, but moving to a different VM mandates a big rewrite.
My most plausible guess is that the compiler writers don't want to dig into native code and performance; writing a TS-to-Go translator looks like a more familiar task to them. The lack of any performance analysis of the JS version anywhere in the announcements kinda confirms this.
My read on why Go and not AOT C#: it would be more difficult to get C# programmers to give up idiomatic OOP in C# than to get them to switch to Go. Go is being used as a forcing function to push a change in dev culture. This wouldn't generalize to teams that have other ways of dealing with cultural change.
I love all this native tooling for JS making things faster.
I kinda wonder, though, if in 5 or 10 years how many of these tools will still be crazy fast. Hopefully all of them! But I also would not be surprised if this new performance headroom is eaten away over time until things become only just bearable again (which is how I would describe the current performance of typescript).
Even if they freeze typescript development after the native implementation, given that the current performance was apparently acceptable to the current users, type complexity will just grow to use up the headroom
Plus, using TS directly to do runtime validation of types will become a lot more viable without having to precompile anything. Not only serverside, we'll compile the whole thing to WASM and ship it to the client to do our runtime validation there.
This will be very welcome. I've been working on refactoring very large TypeScript files in a very large solution in VS2022. Sometimes it gets into a state where just editing the code or copy/pasting causes it to hang for a few seconds and the fans on my workstation to take off like a jet engine. The typing advantages my team has gotten from migrating our codebase to TypeScript have been invaluable, but the performance implications really hurt.
If you squint, Porffor[1] might end up being something like that.
It doesn't use type hints yet, and the difficulty there is that you'd need a sound type system in order to rely on the types. You may be able to use type hints to generate optimized and fallback functions, with type guards, but that doesn't exist yet and it sounds like the TypeScript team wants to move pretty quickly with this.
This is what I would have liked too: Figure out a sufficient subset of TypeScript that can be compiled to native/WASM and then write TSC in that subset.
While I like faster TSC, I don't like that the TypeScript compiler needs to be written in another language to achieve speed; it kind of reminds everyone that TS isn't a good language for complicated CPU/IO tasks.
Given that the TypeScript team has resigned itself to the fact that JavaScript engines can't run the TypeScript compiler (TSC) sufficiently fast for the foreseeable future, and is rewriting it entirely in Go, it is unlikely they will seek to do AOT.
> The JS-based codebase will continue development into the 6.x series, and TypeScript 6.0 will introduce some deprecations and breaking changes to align with the upcoming native codebase.
> While some projects may be able to switch to TypeScript 7 upon release, others may depend on certain API features, legacy configurations, or other constraints that necessitate using TypeScript 6. Recognizing TypeScript’s critical role in the JS development ecosystem, we’ll still be maintaining the JS codebase in the 6.x line until TypeScript 7+ reaches sufficient maturity and adoption.
It sounds like the Python 2 -> 3 migration, or the .Net Framework 4 -> .Net 5 (.Net Core) migration.
I'm still in a multi-year project to upgrade past .Net Framework 4; so I can certainly empathize with anyone who gets stuck on TS 6 for an extended period of time.
Better a language that deprecates and breaks things at regular intervals than one with Forever Backward Compatibility like C++, which evolves into a mutated, tentacled monster that strangles the developers trying to maintain a project.
I lived and worked through the Python 2->3 fiasco, working on a Python library that had to run on both versions. I have since abandoned the language. Python 3 was both slower and not backwards compatible, whereas TSC 7 is 10x faster and uses half the memory. I'm not worried.
This is mostly about the tooling and ecosystem, they want to stop things from depending on the internal workings of the compiler. If you just want to write and compile TS you'll be fine, it does not mean breaking changes to actual TypeScript grammar.
Yeah, this is not ideal. I’m hoping that the breaking changes don’t affect the code at my work, since we also had to spend multiple years on a major .NET Core transition. I want the faster compiles right away, not in a few years.
I really wonder why this project was not developed on .NET Core. It would then have been possible to embed it in .NET projects, increasing the number of libraries available in the ecosystem. It would also have leveraged the .NET GC, which is better than Go's. Rewriting in Go really doesn't make sense to me.
Oh man, this is great. I've been having performance issues with TSC for language services.
My theory - that Go will always be the choice for things like this when ease, simplicity, and good (but not absolute) performance is the goal - continues to hold.
This is great news. We actually use esbuild most of the time to transpile TS files because tsc is so slow (and only run tsc in CI/CD pipelines). Coincidentally, esbuild is also written in Go.
This is specifically about the performance of the TypeScript toolchain (compiler, editor experience); the runtime code generated is the same. TypeScript is just JS with types.
Not sure if this point was brought up but I think it's worth considering.
If the Typescript team were to go with Rust or C# they would have to contend with async/await decoration and worry about starvation and monopolization.
Go frees the developer from worrying about these concerns.
Faster compilation is great, but what I'm really excited for is a faster TS Language Server. Being able to get autocomplete hints, hover info, goto definition, error squiggles and more anything close to 10x faster is going to be revolutionary when working in large TS codebases.
Have there been any talks or progress on native inclusion of TypeScript in Node.js (type checking, path resolution) without using tsc, ts-node, or tsx, plus native VS Code TS debugging and testing support? We are 22 versions into Node.js and the support still seems limited at best. Is it possible to share a roadmap of what is being done in this territory?
Node's type stripping will only let you run your TypeScript in Node; it does not perform type checking, and I don't believe there are any plans for it to. This is from Node.js 23.9.0.
Something that kind of got understated in here IMO is the improved refactoring and code intelligence that this will unlock. Very exciting! I am looking forward to all the new tooling and frameworks that come out of this change. TS is already an amazing language and just keeps getting better!
Typescript was the best thing that ever happened to the web! Thanks Daniel, Ryan and Anders and the rest of the team for making development great for over 10 years! This improvement is amazing!
TS v5.8 added the --erasableSyntaxOnly option to go along with Node.js 23.6's support for running TS directly; the flag errors on enums (as well as namespaces and other non-erasable syntax). I haven't found anything that mentions a deprecation of enums when searching; TS v6 is supposed to be as feature-compatible with v7 as possible, and since enums are not a type-level feature of JS I wouldn't rely on them.
Right now you can use --erasableSyntaxOnly to find any enums in your code and start porting them to an alternative. This article lists alternatives if you're interested.
There are various ways to (de)couple the compiler to/from vscode, but it's definitely handy to have inline typechecking. Is this possible without running the compiler?
I'll give Typescript yet another go. I really like it and wish I could use it. It's just that any project I start, inevitably the sourcemap chain will go wrong and I lose the ability to run the debugger in any meaningful way.
yes, this will definitely vastly increase the Doom fps, haha (I’m the guy that did that project). But I think there’s a lot more to it than that.
tl;dr — Rust would be great for a rewrite, but Go makes way more sense for a port. After the dust settles, I hope people focus on the outcomes, not the language choice.
I was very surprised to see that the TypeScript team didn’t choose Rust, not just because it seemed like an obvious technical choice but because the whole ecosystem is clearly converging on Rust _right now_ and has been for a while. I write Rust for my day job and I absolutely love Rust. TypeScript will always have such a special place in my heart but for years now, when I can use Rust.. I use Rust. But it makes a lot of sense to pick Go.
The key “reading between the lines” from the announcement is that they’re doing a port not a rewrite. That’s a very big difference on a complex project with 100-man-years poured into it.
Places where Go is a better fit than Rust when porting JavaScript:
- Go, like JavaScript and unlike Rust, is garbage collected. The TypeScript compiler relies on garbage collection in multiple places, and there are probably more places that rely on it without anyone realizing. It would be dangerous and very risky to attempt to unwind all of that. In a Rust rewrite this problem goes away, but they're not doing a rewrite.
- Rust is so stupidly hard. I repeat, I love Rust. Love it. But damn. Sometimes it feels like the Rust language actively makes decisions that demolish the DX of the 99.99% use-case if there’s a 0.001% use-case that would be slightly more correct. Go is such a dream compared to Rust in this respect. I know people that more-or-less learned Go in a weekend and are writing it professionally daily. I also know people that have been writing Rust every day professionally for years and say they still feel like noobs. It’s undeniable what a difference this makes on productivity for some teams.
Places where Go is just as good a fit as Rust:
- Go and Rust both have great parallelism/concurrency support. Go supports both shared memory (with explicit synchronization) and message-passing concurrency (via goroutines & channels). In JavaScript, multi-threading requires IPC with WebWorkers, making Go's concurrency model a smoother fit for porting a JS-heavy codebase that assumes implicit shared state. Rust enforces strict ownership rules that disallow shared mutable state, or at least make it a lot harder (by design, admittedly).
- Go and Rust both have great tooling. Sure, there are so many Rust JavaScript tools, but esbuild definitively proves that Go tooling can work. Heck, the TypeScript project itself uses esbuild today.
- Go and Rust are both memory safe.
- Go and Rust have lots of “zero (or near zero) cost abstractions” in their language surface. The current TypeScript compiler codebase makes great use of TypeScript enums for bit fiddling and packing boolean flags into a single int32. It sucks to deal with (especially with a Node debugger attached to the TypeScript typechecker). While Go structs are not literally zero cost, they’re going to be SO MUCH nicer than JavaScript objects for a use-case like this that’s so common in the current codebase. I think Rust sorta wins when it comes to plentiful abstractions, but Go has more than enough to make a huge impact.
Places where Rust wins:
- the Rust type system. no contest. In fairness, Go doesn’t try to have a fancy type system. It makes up for a lot of the DX I complained about above. When you get an error that something won’t compile, but only when targeting Windows because Rust understands the difference in file permissions… wow. But clearly, what Go has is good enough.
- so many new tools (basically, all of them that are not also in JS) are being done in Rust now. The alignment on this would have been cool. But hey, maybe this will force the bindings to be high-quality which benefits lots of other languages too (Zig type emitter, anyone?!).
By this time next week when the shock wears off, I just really hope what people focus on is that our TypeScript type checking is about to get 10 times faster. That’s such a big deal. I can’t even put it into words. I hope the TypeScript team is ready to be bombarded by people trying to use this TODAY despite them saying it’s just a preview, because there are some companies that are absolutely desperate to improve their editor perf and un-bottleneck their CI. I hope people recognize what a big move this is by the TypeScript team to set the project up for success for the next dozen years. Fully ejecting from being a self-hosted language is a BIG and unprecedented move!
A tiny thing that's not relevant to this particular piece of work, but worth keeping in mind when thinking about Go: while Go, like Python, is typically described as "memory safe", unlike Java (or, more remarkably, Rust) it is very possible for naive programmers to cause undefined behaviour in this language without realising it.
Specifically if you race any non-trivial Go object (say, a hash table, or a string) then that's immediately UB. Internally what's happening is that these objects have internal consistency rules which you can easily break this way and they're not protected against that because the trivial way to do so is expensive. Writing a Go data race isn't as trivial as writing a use-after-free in C++ but it's not actually difficult to do by mistake.
In single threaded software this is no caveat at all, but most large software these days does have some threading involved.
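To illustrate how little it takes: the sketch below races two goroutines writing to an ordinary map. Run it with `go run -race` and the race is reported; without -race it may die with "concurrent map writes" or silently corrupt the map's internals.

    package main

    import "sync"

    func main() {
        m := map[int]int{}
        var wg sync.WaitGroup
        for w := 0; w < 2; w++ {
            wg.Add(1)
            go func(base int) {
                defer wg.Done()
                for i := 0; i < 1000; i++ {
                    m[base*1000+i] = i // unsynchronized writes to a shared map
                }
            }(w)
        }
        wg.Wait()
    }

The fix is a sync.Mutex around the writes (or sync.Map), but nothing in the language forces you to notice that.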
Especially given Anders is the one announcing this, given he was the chief architect of C#. But C# AOT is maybe not as mature/lightweight as a Go binary and clearly startup time here is very important. [Edit: the real reason is in the FAQ posted in a bunch of other comments https://github.com/microsoft/typescript-go/discussions/411]
Some big projects have so many people trying to do PRs that it's actually a bit of a hassle to deal with them all. So I don't think maximising the number of contributors should necessarily be one of the top goals for projects that are already big or have guaranteed relevance.
Is learning a language even a thing anymore with $Internal_or_external_LLM_helper plugin available for every IDE? I haven't found syntax lookups to be that much a concern anymore and any boneheaded LLM suggestions are trivial to detect/fix.
"Memory safety" is a term of art meaning susceptibility to memory corruption attacks. They had to come up with some name for it; that's the name they came up with. This is a perennial tangent in conversations among technologists: give something a legible name, and people will try to axiomatically (re)define it.
Rust is memory safe. Go is memory safe. Python is memory safe. Typescript is memory safe. C++ is not memory safe. C is not memory safe.
This is true in that if you pass pointers between goroutines, you have no guarantees about what's at the end of that pointer. However, this is "by design", in that generally you shouldn't do that; the burden the Go memory model places on developers is to remember what's passed by value and what's passed by pointer, and to act accordingly. The rest it takes care of for you.
The burden placed by rust on the developer is to keep track of all possible mutability and readability states and commit to them upfront during development. (If I may summarize, been a long time since I wrote any Rust). The rest it takes care of for you.
The question of which a developer prefers at a certain skill level, and which a manager of developers at a certain skill level prefers, is going to vary.
I mean, no? That's basically a known bug in Rust's compiler; specifically, it's a soundness hole in type checking, and you'd basically never write it by accident. Go read the guts of it for yourself if you think you might accidentally do this.
At some point a next generation solver will make this not compile, and people will probably invent an even weirder edge case for that solver.
Whereas the Go example is just how Go works, that's not a bug that's by design, don't expect Go to give you thread safety that's not what they promised.
Thank you for the clarification; you're right. I guess I was just trying to say that it's a spectrum (even if Rust is very, very far along the way towards not having any holes). I can't seem to find it, but there's some Tony Hoare or maybe Alan Turing quote about how the only 100% correct computer program ever written was the first one.
Segfaults are very much a memory safety issue. You are correct that concurrency is the cause here, but that doesn't mean it's not a memory safety issue.
That said, most people still call Go memory safe even in spite of this being possible, because, well, https://go.dev/ref/mem
> While programmers should write Go programs without data races, there are limitations to what a Go implementation can do in response to a data race. An implementation may always react to a data race by reporting the race and terminating the program. Otherwise, each read of a single-word-sized or sub-word-sized memory location must observe a value actually written to that location (perhaps by a concurrent executing goroutine) and not yet overwritten. These implementation constraints make Go more like Java or JavaScript, in that most races have a limited number of outcomes, and less like C and C++, where the meaning of any program with a race is entirely undefined, and the compiler may do anything at all.
I believe that most JVMs implement dynamic dispatch in a similar manner to C++: classes are on the heap and have a vtable pointer inside them, whereas Go's interfaces work like Rust's trait objects, a pair of (data pointer, vtable pointer). So the behavior we see here with Go is unlikely to be possible in Java: a tear couldn't corrupt the vtable pointer, since it lives inside what the single pointer points at rather than right next to it in memory.
These bugs do happen, but they have a more limited blast radius than ones in languages that are clearly unsafe, and so it feels wrong to lump Go in with them even though in some strict sense you may want to categorize it the other way.
Sure, that's all true. It does limit Go's memory safety guarantees. However, I still believe that just because Java and other languages can give better guarantees around the blast radius of concurrency bugs does not mean that Go's definition of memory safety is invalid. I believe you can justifiably call Go memory-safe with unsafe concurrency. This may give people the wrong idea about where exactly Go fits in on the spectrum of "safe" coding (since, like you mentioned, some languages have unsafe concurrency that is still safer,) but it's not like it's that far off.
On the other hand, though, in practice, I've wound up using Go in production quite a lot, and these bugs are exceedingly rare. And I don't mean concurrency bugs: Go's concurrency facilities kind of suck, so those are certainly not exceedingly rare, even if they're less common than I would have expected. However... not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.
So how severely you treat this lapse is going to come down to taste. I see the appeal of Rust's iron-clad guarantees around limiting the blast radius, but of course everything comes with limitations. Any discussion of the limits of guarantees like these should put some emphasis on the real impact: it's easy enough to see that the memory-management issues in C and C++ are serious, based on the security track record of programs written in them, while I think we have yet to fully understand how much Go's lack of safe concurrency will impact Go software in the long run.
> On the other hand, though, in practice, I've wound up using Go in production quite a lot, and these bugs are exceedingly rare.
I both want to agree with this, but also point to things like https://www.uber.com/en-CA/blog/data-race-patterns-in-go/, which found a bunch of bugs. They don't really contextualize it in terms of other kinds of bugs, so it's really hard to say from just this how rare they actually are. One of the insidious parts of non-segfaulting data race bugs is that you may not notice them until you do, so they're easy to under-report. Hence the checker used in the above study.
> not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.
For sure, absolutely. And I do think that's meaningful and important.
> I think we're still yet to fully understand how much of an impact Go's lack of safe concurrency will impact Go software in the long run.
Yep, and I do suspect it'll be closer to Java than to C.
The Uber page does a pretty good job of summing it up. The only thing I'd add is that there has been a bit of effort to reduce footguns since they posted that article; as one example, the issue with accidentally capturing range variables is now fixed in the language[1]. On top of having a built-in (runtime) race detector since 1.1 and runtime concurrent map access detection since 1.6, Go is also adding more tools to make testing concurrent code easier, which should help ensure potentially racy code is at least tested[2] (ideally with the race detector on). Accidentally capturing named return values is now caught by a popular linting tool[3]. There is also gVisor's checklocks analyzer, which, with the help of annotations, can catch many misuses of mutexes and of data protected by mutexes[4]. (This would be a lot nicer as a language feature, but oh well.)
I don't know if I'd evangelize for adopting Go on the scale that Uber has: I think Go works best for shared-nothing architectures and gets gradually less compelling as you dig into more complex concurrency. That said, since Uber is an early adopter, there is a decent chance that what they have learned will help future organizations avoid repeating some of the same issues, via improvements to tooling and the language.
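As an aside on the loop-variable fix mentioned above: before Go 1.22, every iteration shared a single loop variable, so a sketch like this could print the same file three times; from 1.22 on, each iteration gets a fresh variable and each file prints once (in some order):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for _, f := range []string{"a.ts", "b.ts", "c.ts"} {
            wg.Add(1)
            go func() {
                defer wg.Done()
                fmt.Println(f) // pre-1.22: f was shared across iterations
            }()
        }
        wg.Wait()
    }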
I don't think so; he stated that one of the most important reasons was code compatibility, not specifically good concurrency support (though that was important, indeed). I think even the most functional languages would not be easily compatible with "functional TypeScript code" without heavy modification.
But either way, there is room for innovation in the field; I have yet to see an ML-family language with concurrency as "hands-on" as Go's, and it would be extremely interesting to see that happen.
Funnily enough, that's exactly what they're doing in this announcement. They're rewriting `tsc` in Go and shipping native binaries, rather than shipping JS.
That's really not what's stopping TS being built in to browsers. Have a look at the discussions around the types-as-comments proposal https://tc39.es/proposal-type-annotations/
This raises the question: should we port all backend TypeScript code to Go (or Rust) to get a similar runtime performance improvement? Is TypeScript generally this inefficient?
If your backend is JS and it's too slow for you, then obviously porting it to a machine code binary will speed it up significantly. If you are happy with your backend performance, then does it matter?
The post title is a bit misleading. It should say a 10x faster build time, or a 10x faster TypeScript compiler: tsc (the compiler) is 10x faster, but not the final TS program at runtime. Still an amazing feat! But Doom will not run faster.
"To meet those goals, we’ve begun work on a native port of the TypeScript compiler and tools. The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage."
To clarify why it's actually not that ambiguous: TS is not (and does not have) a runtime at all. Even TS-first runtimes like Deno are (1) not TS but its own thing and most importantly (2) just JS engines with a frontend layer that treats TS as a first-class citizen (in Deno's case, V8).
It's hard to tell if there will even be a runtime that somehow uses TS types to optimize even further (e.g. by proving that a function diverges) but to my knowledge they currently don't and I don't think there's any in the works (or if that's even possible while maintaining runtime soundness, considering you can "lie" to TS by casting to `unknown` and then back to any other type).
“faster typescript” would also be a valid way to say the typescript compiler found a way to automatically write more performant javascript.
Just like if you said faster C++ that could mean the compiler runs faster, or the resulting machine code runs faster.
Just because the compile target is another human readable language doesn’t mean it ceases to be a typescript program.
I didn’t think this particular example was very ambiguous because a general 10x speed up in the resulting JS would be insane, and I have used typescript enough to wish the compiler was faster. Though if we’re being pedantic, which I enjoy doing sometimes, I would say it is ambiguous.
> “faster typescript” would also be a valid way to say the typescript compiler found a way to automatically write more performant javascript.
That still wouldn't make sense, in the same way that it wouldn't make sense to say "Python type hints found a way to automatically write more performant Python". With few exceptions, the TypeScript compiler doesn't have any runtime impact at all — it simply removes the type annotations, leaving behind valid JavaScript that already existed as source code. In fact, avoiding runtime impact is an explicit design goal of TypeScript [1].
They've even begun to chip away at the exceptions with the `erasableSyntaxOnly` flag [2], which disables features like enums that do emit code with runtime semantics.
Thanks for the clarification. For those of us who don't use TypeScript day to day, I feel that it is ambiguous. Without clicking the link, you wouldn't know whether it's about a compiler or a runtime. What if they had announced a Bun competitor?
Those are javascript runtimes, not TypeScript runtimes. The point stands.
If you don't know enough about TypeScript to understand that TypeScript is not a runtime, I'm not sure why you would care about TypeScript being faster (in either case).
That's not the point I was making: GP was wondering why someone who didn't even know TypeScript compiles to JavaScript and runs atop a JavaScript engine would care that it had gotten 10x faster.
From the title, my initial assumption was someone wrote a compiler & runtime for typescript that doesn't target javascript, which was very exciting. And I do work with typescript.
It has become a sport here to criticize titles for not explaining any random thing the commenter doesn't know. Generally these things are either in the article or they are very easily findable with a single web search.
It was ambiguous to me. I've used TS a few times over the years, so I thought "native TypeScript compiler" meant AOT TS, not a TS compiler written in Go
There is Static Hermes from Meta, which does AOT compilation to native, so I find it genuinely ambiguous. For a second I thought they had written a compiler instead of a transpiler.
I don't think this is misleading for anyone familiar with Typescript. Typescript itself has no impact on performance, and it is known that the compilation and type-checking speed is often a problem. So I immediately assumed that it was about exactly that.
When I read the title I thought maybe they had implemented a TypeScript-to-binary (instead of JavaScript) compiler that speeds up the program by 10x; it would also have the added benefit of speeding up the compiler by 10x!
I don't think that is too far fetched either since typescript already has most of the type information.
For anyone who uses TypeScript on a daily basis it's not ambiguous at all. Everyone who works with TS knows the runtime code is JavaScript code that is generated by the TypeScript compiler. And it's also pretty common knowledge that JavaScript is quite fast, but TS itself is not.
I don't think it's misleading at all, because you can't run Typescript. Typescript is either compiled, transpiled or stripped down into another language and that's what gets run in the end.
You can't run Java either as it's compiled to bytecode, yet when someone says "we made Java 10x faster" you wouldn't assume that just the compilation got faster, right? When people market Rust projects as blazingly fast nobody assumes it's about compilation, in part because a blazingly fast Rust compiler would be a miracle. Outside of this comment section people have always been using a programming language name for this because everyone knows what they mean.
It would be possible that MS wrote a TypeScript compiler that emits native binaries and that made the language 10x faster, why not?
You could make the same argument of anything but machine code, and even then some would debate whether it's really running directly enough on modern CPUs. In the end you still have the time it takes to build your project in a given language and the runtime performance of the result. Those remain very useful distinctions regardless of how many layers of indirection sit between source code and execution.
The difference here is that with Typescript, you're not really measuring Typescript's performance, but whatever your output language is. If transpile to Javascript, you're measuring that, if you output Wasm, you measure that, etc, and the result isn't really dictated by Typescript.
Transpiling isn't the only possibility to run TypeScript code, it's just the way to do it right now. A long time ago interpreting was the most common way to run JavaScript, now it's to JIT it, but you can also compile it straight to platform byte code or transpile it to C if you really want. That you could transpile JavaScript to C doesn't mean all ways of doing it would be equally performant though.
Transpiling in itself also doesn't remove the possibility of producing more optimized code, especially if the source has more information about the types. The official TypeScript compiler doesn't really do any of that right now (e.g. it won't remove a branch that handles a variable being a number even when it has the type information to know the variable can't have been set to one). Heck, it doesn't even natively support producing minified transpiled output to improve runtime parsing (you can always bolt that on yourself). In both examples it's not that transpilation prevents the optimization, though; it's just not done (or possibly not worthwhile, if TS only ever targets JS runtimes, as JS JITs are extraordinarily good these days).
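To make that concrete, here's a contrived sketch (the function is invented for illustration) of the kind of branch a type-aware optimizer could delete but which tsc currently passes through to the output untouched:

```ts
// tsc knows `value` can only be a string here, so this branch is
// provably dead, yet the emitted JavaScript keeps the check verbatim.
function describe(value: string): string {
  if (typeof value === "number") {
    return "unreachable per the declared types";
  }
  return value.trim();
}
```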
Not really in the case of TypeScript, because (with very small exceptions) when you “compile” TypeScript you are literally just removing the TypeScript, leaving plain JavaScript. It’s just type annotations; it doesn’t describe any runtime behavior at all.
That depends on both the target and the TypeScript features you use. In many cases, even when downleveling isn't involved, transpiled code can contain more than just stripped type info (particularly common with classes or anything that needs helper functions). There's also nothing stopping a TypeScript compiler from optimizing transpiled (or directly compiled) code like any other compiler would, though the default TypeScript tools don't really go after any of that (or even produce a minified version themselves using the additional type hints).
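As a small sketch of that, assuming a tsconfig with `"target": "ES5"`: compiling the class below produces an IIFE with prototype wiring, an explicit `this.name = name` assignment for the parameter property, and injected `__awaiter`/`__generator` helpers for the async method, i.e. considerably more than type stripping.

```ts
// Input: looks like "JS plus annotations", but the ES5 output does not.
class Greeter {
  constructor(private name: string) {}

  async greet(): Promise<string> {
    return `hello, ${this.name}`;
  }
}
```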
Agreed, at least usually right now (it doesn't have to be forever, which would probably be the most realistic way for TypeScript to make meaningful runtime gains). That does not preclude the possibility of producing more optimal JavaScript code for the runtime to consume. I give a couple examples of that in the other comments.
look, not to argue with a stranger on hacker news, lol, but genuine calm question here: is this really a helpful nit? I know what you're getting at but the blogpost itself doesn't imply that JavaScript is 10x faster. I could complain, about your suggested change, that it's really `build and typecheck` time. It's a title. Sometimes they don't have _all_ the context. That's ok.
It is for me. If someone says TypeScript is faster than X, they rarely mean the build time. I understand other people's points about TypeScript not being a runtime at all and only being a compiler, but when casually saying "TypeScript is faster than say ruby", people do not mean the compiler.
But no one actually says "TypeScript is faster than say ruby". They probably say "node is faster than say ruby" or maybe "bun is faster than say ruby". Perhaps they say "JavaScript is faster than say ruby", although even that is underspecified.
well, thanks for explaining. we might just simply disagree here. when I hear "TypeScript" I think of TypeScript, and when I hear "JavaScript" I think of JavaScript. I know what you mean re: casually speaking, but this is a blogpost from the TypeScript team. That context is there, too. I think if the same title were from an AWS release note, I'd totally see what you mean.
Typescript is JavaScript at runtime. It’s not a separate language, just like Python with type annotations (TypePython?) is just Python at runtime. Both are just type annotations that get stripped away before anything tries to run the code. That’s the genius of the idea and why it’s so easily adopted.
It is quite literally a separate language. Python's type hints are a part of the Python specification and all valid Python type hints will run in any compliant Python runtime. Typescript is not, in any way, valid JavaScript. The moment you add any type syntax, you can no longer run the code in Node or Browsers without enabling a special preprocess step.
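A one-line illustration of the difference: the snippet below is ordinary TypeScript, and feeding it directly to Node or a browser fails with a SyntaxError until something strips or compiles the annotations away.

```ts
// Fine for tsc; a SyntaxError for any plain JavaScript engine.
function clamp(x: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, x));
}
```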
Then read the article? I don't get it - Typescript, to anyone familiar, is not a language runtime. It does not optimize. It is a transpiler. If you don't even know this much about Typescript, you aren't the audience and lack prerequisite knowledge. Go read anything on the topic.
If someone posted an article talking about the "handedness" of DNA or something, I wouldn't complain "oh, you confused me, I thought you were saying DNA has hands!"
I agree with pseudopersonal that the title should be changed. Technically it's not misleading, but not everyone uses or is familiar with TypeScript.
This seems pedantic. As a TypeScript user who is aware of the conversations about build performance, the title is not ambiguous at all. I know exactly they are talking about build time.
It was ambiguous to me. When someone says making a language X-times faster, it's natural to think about runtime performance, not compile times. I know TS runs on JS runtimes, but I assumed, based on the title, they created/modified a JS runtime to natively run TS fast.
The explanations are of course correct, but I think you're right and there's not much downside to being clearer in the title. Maybe they decided against saying "compiler" because the performance boost also covers the language server.
People seem very hurt that the creator of C# didn't pick C# for this very public project from a multi-trillion-dollar corp. I find it very refreshing, they defined logical requirements for what they wanted to do and chose Golang because it ticked more boxes than C#. This doesn't mean that C# sucks or that every C# project should switch to Golang, but there seems to be a very vocal minority affected by this logical decision.
I didn't include every variant I've ever read, but there have been no shortage of people saying that the only thing that matters is your algorithms.
Every time I've said that languages like Python, JavaScript, and basically any other language where it's hard to avoid heap allocations, pointer chasing, and copious data copies are all slow, there are plenty of people who come out of the woodwork to inform me that it's all negligible.
> no shortage of people saying that the only thing that matters is your algorithms.
To be a little bit fair to those people, I have been in many situations where people go "my matlab/python code is too slow, I must re-write it in C", and I've been able to get an order of magnitude improvement by re-writing the code in the same language. Hell, I've ported terrible Fortran code to python/numpy and gotten significant performance improvements. Of course, taking that well-written code and re-writing it in well-written C will probably give you a further order of magnitude improvement. Fast code in a slow language can beat slow code in a fast language, but it will obviously never beat fast code in a fast language.
For sure. I agree with everything you say, and I've experienced the same thing 100 times myself, including the specific scenario of speeding up someone's MATLAB code by multiple orders of magnitude by vectorizing the crap out of it. People seem to be almost drawn to quadratic-or-worse algorithms, even when I'd expect them to know better.
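A TypeScript rendition of the pattern (function names invented for illustration): same language, same result, an order-of-magnitude gap from the algorithm alone.

```ts
// The accidental quadratic: a linear scan per element makes this O(n^2).
function dedupeSlow(items: string[]): string[] {
  const out: string[] = [];
  for (const item of items) {
    if (!out.includes(item)) out.push(item);
  }
  return out;
}

// Same behavior in O(n): a Set does the membership checks, and it
// preserves first-insertion order just like the loop above.
function dedupeFast(items: string[]): string[] {
  return [...new Set(items)];
}
```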
I'm just a little bitter because of how many times I've been shushed in places like programming language subreddits and here when I've pointed out how inefficient some cool new library/framework/paradigm is. It feels like I'm either being gaslit or everyone else is in denial that things like excessive heap allocations really do still matter in 2025, and that JITs almost never help much with realistic workloads for a large percentage of applications.
All the bootcamp cargo-culting crew have pumped lies such as "the language doesn't matter" or "learn coding in 1 week for a SWE job with JS/TS", and it has caused an increase in low-quality software, with many developers then asking how to bolt "performance" optimizations on after the fact.
What we have just seen is the TS team admitting that a limit has been reached, and *almost always* the solution is either porting to a compiled language or relying on new computers with new processors, in accordance with Moore's Law, to get performance for free.
Now the bootcampers are rediscovering why we need "static typing" and why a "compiled language" is more performant than a VM-based language.
Can you imagine the progress we could've made by now if people just tried to use the right tool for the job instead of trying to make the wrong tool good enough?
All the time spent trying to optimize JITs for JavaScript engines, or alternative Python implementations (e.g., PyPy), and fruitless efforts like trying to get JVMs to start fast enough for use in cloud "lambda function" applications. Ugh...
Many people say this, but it is obviously bullshit. Then again, most things people say all the time are bullshit, so I would not bother with it that much. It's not like people are saying "programming languages don't matter, and here my claim is backed by a hundred heavily reviewed statistics and a strong literature"; it is more like "programming languages don't matter, well, at least I feel like it, the same way flowers smell like blue or something".
Javascript is not slow because of GC or JIT (the JVM is about twice as fast in benchmarks; Go has a GC) but because JS as a language is not designed for performance. Despite all the work that V8 does it cannot perform enough analysis to recover desirable performance. The simplest example to explain is the lack of machine numbers (e.g. ints). JS doesn't have any representation for this so V8 does a lot of work to try to figure out when a number can be represented as an int, but it won't catch all cases.
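A sketch of the classic workaround, the asm.js-era `| 0` idiom: the language still has no int type, but the bitwise-or truncates to a signed 32-bit integer, which engines such as V8 have historically used as a hint that the value can stay in an integer representation rather than a heap-allocated double.

```ts
// `(a + b) | 0` truncates to a signed 32-bit integer; JS itself only
// has doubles, so this is the closest thing to declaring "int32".
function addInt32(a: number, b: number): number {
  return (a + b) | 0;
}

addInt32(2 ** 31 - 1, 1); // -2147483648: it wraps like a machine int
```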
As for "working solution over language politics" you are entirely pulling that out of thin air. It's not supported by the article in any way. There is discussion at https://github.com/microsoft/typescript-go/discussions/411 that mentions different points.
I think JS can really zoom if you let it. Hamsters.js, GPU.js, taichi.js, ndarray, arquero, S.js, are all solid foundations for doing things really efficiently. Sure, not 'native' performance or on the compile side, but having their computational models in mind can really let you work around the language's limitations.
JS can be pretty fast if you let it, but the problem is the fastest path is extremely unergonomic. If you always take the fastest possible path you end up more or less writing asm.js by hand, or a worse version of C that doesn't even have proper structs.
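A sketch of what that fast path looks like (all names invented): no structs, so you hand-roll a struct-of-arrays over typed arrays, which is dense and JIT-friendly but about as ergonomic as the C it imitates.

```ts
// One typed array per "field"; the entity is just an index.
const N = 10_000;
const posX = new Float64Array(N);
const posY = new Float64Array(N);
const velX = new Float64Array(N);
const velY = new Float64Array(N);

// Tight, allocation-free loop; the cost is that "a particle" never
// exists as a value you can pass around or give methods to.
function step(dt: number): void {
  for (let i = 0; i < N; i++) {
    posX[i] += velX[i] * dt;
    posY[i] += velY[i] * dt;
  }
}
```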
I find these userland libraries particularly effective, because you'll never leave JS land, conveniently abstracting over Workers, WebGL/WebGPU and WASM.
JS, interestingly, has a notion of integers, but only in the form of integer arrays, like Int16Array.
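For instance, the wrap-around below is genuine machine-integer behavior, something a plain `number` never exhibits:

```ts
// Typed arrays are the one place JS guarantees integer storage.
const samples = new Int16Array(2);
samples[0] = 70000;      // stored modulo 2^16 into a signed 16-bit slot
console.log(samples[0]); // 4464, not 70000
```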
I wonder if Typescript could introduce integer type(s) that a direct TS -> native code compiler (JIT or AOT) could use. Since TS becomes valid JS if all type annotations are removed, such numbers would just become normal JS numbers from the POV of a JS runtime which does not understand TS.
AssemblyScript (for WASM) and Huawei's ArkTS (for mobile apps) already exist in this landscape. However, they are too specific in their use cases and have never gained public attention.
> reveals something deeper: Microsoft prioritized shipping a working solution over language politics
It's not that "deep". I don't see the politics either way; there are clearly successful projects using both Go and Rust. The only people who see "politics" are those who see people disagreeing, are unable to understand the substance of the disagreement, and decide "ah, it's just politics".
This is not accusatory, but do you write your comments with AI? I checked your profile and someone else had the same question a few days ago. It's the persistent structure of "it isn't X – it's Y" with the em dash (– not -) that makes me wonder this. Nothing to add to your comment otherwise, sorry.
Sorry for being pedantic, but they are using an en dash (–), not an em dash (—), which is a little strange because the latter is usually the one meant for adding information in secondary sentences—like commas and parentheses. In addition, in most styles, you're not supposed to add spaces around it.
So, I don't think the comment is AI-generated for this reason.
"The en-dash is also increasingly used to replace the long dash ('—', also called an em dash or em rule). When using it to replace a long dash, spaces are needed either side of it – like so." https://en.wikipedia.org/wiki/En_(typography)
You're right, oops. I agree with your reasoning (comment still gives off slop vibes but that's unprovable). But the parent has been flagged, so I'm not sure if that means admins/dang has agreed with me or if it was flagged for another reason.
em-dash is shift-option-hyphen on macOS, so it's not a good heuristic—I use it myself.
They're using en-dash which is even easier: option-hyphen.
This is the wrong way to do AI detection. For one, an LLM would have used the right dash. And at least go after someone wasting our time with belabored or overwrought text that doesn't even engage with anything.
The em dash thing is not very conclusive. I have been writing with the em dash for many years, because it looks better and is very accessible on Mac OS (long press on dash key), while carrying a different tone than the simple dash. That, and I read some Tristram Shandy.
> The Go choice over Rust/C# reveals something deeper: Microsoft prioritized shipping a working solution over language politics. Go's simplicity (compared to Rust) and deployment model (compared to C#) won the day.
I'm not sure that this is particularly accurate for the Rust case. The goal of this project was to perform a 1:1 port from TypeScript to a faster language. The existing codebase assumes a garbage collector so Rust is not really a realistic option here. I would bet they picked GCed languages only.
I can imagine C# being annoying to integrate into some CIs, for instance. Go fits a sweet spot, with its fast compiler and usually limited number of external dependencies.
> Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
> We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
Personally, I'm a big believer in choosing the right language for the job. C# is a great language, and often is "good enough" for many jobs. (I've done it for 20 years.) That doesn't mean it's always the best choice for the job. Likewise, sometimes picking a "familiar language" for a target audience is better than picking a personal favorite.
>The Go choice over Rust/C# reveals something deeper: Microsoft prioritized shipping a working solution over language politics. Go's simplicity (compared to Rust) and deployment model (compared to C#) won the day. Even Anders Hejlsberg – father of C# – chose Go for pragmatic reasons!
I don't follow. If they had picked Rust over Go why couldn't you also argue that they are prioritising shipping a working solution over language politics. It seems like a meaningless statement.
Go with parametric types is already a reasonably expressive language. It is much more expressive than C, in which a number of compilers have been written, at least initially; not everyone had the luxury of using OCaml or Haskell.
There is already a growing number of native-code tools of the JS/TS ecosystem, like esbuild or swc.
Maybe we should expect attempts at native AOT compilation for TS itself, to run on the server side, much like C# has an AOT native-code compiler.
> it signals we've hit fundamental limits in JS/TS for systems programming
Really, is this a surprise to anyone? I don't think anyone considers JS suitable for 'systems programming'.
Javascript is the language we have for the browser - there's no value in debating its merits when it's the only option. Javascript on the server has only ever accrued benefits from being the same language as the browser.
> When a language team abandons self-hosting (TS in TS) for raw performance (Go), it signals we've hit fundamental limits in JS/TS for systems programming.
I hope you really mean for "userspace tools / programs" which is what these dev-tools are, and not in the area of device drivers, since that is where "systems programming" is more relevant.
I don't know why one would choose JS or TS for "systems programming", but I'm assuming you're talking about user-space programs.
But really, those who know the difference between a compiled language and a VM-based language know the obvious fundamental performance limitations of developer tools written in VM-based languages like JS or TS and would avoid them as they are not designed for this use case.
Yeah, the term has changed meaning several times. Early on, "systems programmer" meant basically what we call a "developer" now (by opposition to a programmer or a researcher).
I think they went for Go mostly because of memory management, async and syntactic similarity to interpreted languages which makes total sense for a port.
I wish there were a language like Rust, without the borrow checking and lifetimes, that was also popular and lived in the same space as Go. I think Go is actually the best language in this category, but it's only the best because there is nothing else. All in all, Go is not an elegant language.
OCaml is similar, now that it has multicore. Scala is also similar, though the native-code side (https://scala-native.org/en/stable/) is not nearly as well developed as the JVM side.
Rust loses a lot of its nice properties without borrow checking and lifetimes, though. For example, resources no longer get cleaned up automatically, and the compiler no longer protects you against data races. Which in turn makes the entire language memory unsafe.
OCaml and Haskell already have that nice type system (and even more nice). If OCaml's syntax bothers you, there is Reason [1] which is a different frontend to the same compiler suite.
Also in this space is Gleam [2] which targets Erlang / OTP, if high concurrency and fault tolerance is your cup of tea.
So in order to get "faster TypeScript" you have to port the existing "transpiler" to a compiled language that delivers said faster performance.
This is an admission that these JavaScript-based languages (including TypeScript) are just unsuitable for performance-sensitive situations, especially when the codebase scales.
As long as it is a compiled language with reasonable performance and proper memory management, Go is the unsurprising choice, but the wise choice to solve this problem.
But this choice definitively shows (as admitted by the TS team) how immature both JavaScript and TypeScript are in performance and scalability scenarios, and that they should be absolutely avoided for building systems that need them. Especially on the backend.
They're not getting "faster typescript", they're getting "a faster typescript transpiler / type checker"; subtle but important difference. The runtime of TS is Javascript engines, and most of "typescript transpilation" is pretty straightforward removal of type information.
Anyway, JS is not immature in performance per se, but in this particular use case, a native language is faster. But they had to solve the problem first before they could decide what language was best for it.
It kinda is already; strip the type information and you've got valid JS. Node.js supports running TypeScript nowadays, with the exception of some unerasable syntax that is being discouraged. I'm sure it's only a matter of time before that bubbles up to V8 and other browser JS engines.
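Roughly, the dividing line under Node's current type stripping (the `--experimental-strip-types` flag) looks like this:

```ts
// Erasable: delete the annotations and valid JavaScript remains.
type Point = { x: number; y: number };
const origin: Point = { x: 0, y: 0 };

// Not erasable: an enum produces runtime code, so pure stripping can't
// handle it; this is the discouraged syntax referred to above.
enum Direction {
  Up,
  Down,
}
```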
Curious how this is going to affect Cursor - I'm assuming it'll just be a drop-in replacement and we can expect Cursor to get the same speed-up as VSCode.
Very pumped to see how this improves the experience in VSCode.
I've been revisiting my editing setup over the last 6 months and to my surprise I've time traveled back to 2012 and am once again really enjoying Sublime Text. It's still by far the most performant editor out there, on account of the custom UI toolkit and all the incredibly fast indexing/search/editing engines (everything's native).
Not sure how this announcement impacts VSCode's UI being powered by Electron, but having the indexing/search/editing engines implemented in Go should drastically improve my experience. The editor will never be as fast as Sublime but if they can make it fast enough to where I don't notice the indexing/search/editing lag in large projects/files, I'd probably switch back.
I can see why they didn't use Rust - I've written little languages in it myself, so I know what's involved, even though I like the language a lot. But I'm quite surprised they didn't use C#. I would have thought ahead-of-time-optimized C# would give nearly the same compilation speed as Go. They do seem to be leaning into concurrency a lot, so maybe it's more about Go's implementation of that (CSP-like), but doesn't .NET have a near-equivalent? I haven't used it in a while.
Also, I get the sense from the video that it still outputs only JS. It would be nice if we could build TypeScript executables that didn't require that, even if it was just WASM, though that is more of a different backend than a different compiler.
Hi folks, Daniel Rosenwasser from the TypeScript team here. We're obviously very excited to announce this! RyanCavanaugh (our dev lead) and I are around to answer any quick questions you might have. You can also tune in to the Discord AMA mentioned in the blog this upcoming Thursday.
Hey Daniel.
I write a lot of tools that depend on the TypeScript compiler API, and they run in a lot of a lot of JS environments including Node and the browser. The current CJS codebase is even a little tricky to load into standard JS module supporting environments like browsers, so I've been _really_ looking forward to what Jake and others have said will be an upcoming standard modules based version.
Is that still happening, and how will the native compiler be distributed for us tools authors? I presume WASM? Will the compiler API be compatible? Transforms, the AST, LanguageService, Program, SourceFile, Checker, etc.?
I'm quite concerned that the migration path for tools could be extremely difficult.
[edit] To add to this as I think about it: I maintain libraries that build on top of the TS API, and are then in turn used by other libraries that still access the TS APIs. Things like framework static analysis, then used by various linters, compilers, etc. Some linters are integrated with eslint via typescript-eslint. So the dependency chain is somewhat deep and wide.
Is the path forward going to be that just the TS compiler has a JS interop layer and the rest stays the same, or are all TS ecosystem tools going to have to port to Go to run well?
Reading the article, it looks like they are writing Go, so they will probably be distributing Go binaries.
Maybe they'll also distribute it as WASM, which is easier to integrate with JavaScript codebases.
Would running WASM be any faster than running JS in V8?
In my experience it is pretty difficult to make WASM faster than JS unless your JS is really crappy and inefficient to begin with. LLVM-generated WASM is your best bet to surpass vanilla JS, but even then it's not a guarantee, especially when you add js interop overhead in. It sort of depends on the specific thing you are doing.
I've found that as of 2025, Go's WASM generator isn't as good as LLVM and it has been very difficult for me to even get parity with vanilla JS performance. There is supposedly a way to use a subset of go with llvm for faster wasm, but I haven't tried it (https://tinygo.org/).
I'm hoping that Microsoft might eventually use some of their wasm chops to improve GO's native wasm compiler. Their .NET wasm compiler is pretty darn good, especially if you enable AOT.
I think the Wasm backends for both Golang and LLVM have yet to support the Wasm GC extension, which would likely be needed for anything like real parity with JS. The present approach is effectively including a full GC implementation alongside your actual Golang code and running that within the Wasm linear memory array, which is not a very sensible approach.
The major roadblocks for WasmGC in Golang at the moment are (A) Go expects a non-moving GC which WasmGC is not obligated to provide; and (B) WasmGC does not support interior pointers, which Go requires.
https://github.com/golang/go/issues/63904#issuecomment-22536...
These are no different than the issues you'd have in any language that compiles to WasmGC, because the new GC'd types are (AIUI) completely unrelated to the linear "heap" of ordinary WASM - they are pointed to via separate "reference" types that are not 'pointers' as normally understood. That whole part of the backend has to be reworked anyway, no matter what your source language is.
Go exposes raw pointers to the programmer, so from your description i think those semantics are too rudimentary to implement Go's semantics, there would need to be a WasmGC 2.0 to make this work.
It sounds like it would be a great fit for e.g. Lua though.
> the Wasm GC extension, which would likely be needed for anything like real parity with JS
Well, for languages that use a GC. People who are writing WASM that exceeds JS in speed are typically doing it in Rust or C++.
Yeah. If I remember it correctly, you need to compile the GC to run on WASM if the GC extension is not supported.
The GC extension is supported within browsers and other WASM runtimes these days - it's effectively part of the standard. Compiler developers are dropping the ball.
Apparently not good enough, given the decision to use Go.
Very likely. Migrating compute-intensive tasks from JavaScript was one of the explicit goals behind the invention of WASM.
Interop with a WASM-compiled Go binary from JS will be slower but the WASM binary itself might be a lot faster than a JS implementation, if that makes sense. So it depends on how chatty your interop is. The main place you get bogged down is typically exchanging strings across the boundary between WASM and JS. Exchanging buffers (file data, etc) can also be a source of slowdown.
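A sketch of why strings are the chatty part, assuming a hypothetical Wasm module that exports its `memory` plus `alloc` and `takeString` functions (invented names, not any real tool's API):

```ts
declare const wasm: {
  memory: WebAssembly.Memory;
  alloc(len: number): number;             // hypothetical exports
  takeString(ptr: number, len: number): void;
};

function sendString(s: string): void {
  const bytes = new TextEncoder().encode(s); // copy 1: JS string -> UTF-8
  const ptr = wasm.alloc(bytes.length);      // reserve linear memory
  new Uint8Array(wasm.memory.buffer, ptr, bytes.length).set(bytes); // copy 2
  wasm.takeString(ptr, bytes.length);        // only now is it visible inside
}
```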
Like others I'm curious about the choice of technology here. I see you went with Go, which is great! I know Go is fast! But it's also a more 'primitive' language (for lack of a better way of putting it) with no frills.
Why not something like Rust? Most of the JS ecosystem that is moving toward faster tools seems to be going straight to Rust (Rolldown, Rspack (the webpack successor), SWC, OXC, Lightning CSS / Parcel, etc.), and one of the reasons given is that it has really great language constructs for parsers and for traversing ASTs (I think largely due to the existence of `match`, but I'm not entirely sure).
Was any thought given to this? And if so, what were the deciding factors for Go vs. something like Rust or another language entirely?
> with no frills.
People say this like it's a bad thing. It's not, it's Go's primary strength.
Yes - for web servers. Not for compilers. I've written a bunch of compilers, and Go is not a language I would choose for that.
Go is exceptionally fast for a transpiler - esbuild is a great example. Rust wouldn't offer significant enough gains to outweigh the adoption and support costs.
We did anticipate this question, and we have actually written up an FAQ entry on our GitHub Discussions. I'll post the response below. https://github.com/microsoft/typescript-go/discussions/411.
____
Language choice is always a hot topic! We extensively evaluated many language options, both recently and in prior investigations. We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript. We wrote multiple prototypes experimenting with different data representations in different languages, and did deep investigations into the approaches used by existing native TypeScript parsers like swc, oxc, and esbuild. To be clear, many languages would be suitable in a ground-up rewrite situation. Go did the best when considering multiple criteria that are particular to this situation, and it's worth explaining a few of them.
By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
Go also offers excellent control of memory layout and allocation (both on an object and field level) without requiring that the entire codebase continually concern itself with memory management. While this implies a garbage collector, the downsides of a GC aren't particularly salient in our codebase. We don't have any strong latency constraints that would suffer from GC pauses/slowdowns. Batch compilations can effectively forego garbage collection entirely, since the process terminates at the end. In non-batch scenarios, most of our up-front allocations (ASTs, etc.) live for the entire life of the program, and we have strong domain information about when "logical" times to run the GC will be. Go's model therefore nets us a very big win in reducing codebase complexity, while paying very little actual runtime cost for garbage collection.
We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
Acknowledging some weak spots, Go's in-proc JS interop story is not as good as some of its alternatives. We have upcoming plans to mitigate this, and are committed to offering a performant and ergonomic JS API. We've been constrained in certain possible optimizations due to the current API model where consumers can access (or worse, modify) practically anything, and want to ensure that the new codebase keeps the door open for more freedom to change internal representations without having to worry about breaking all API users. Moving to a more intentional API design that also takes interop into account will let us move the ecosystem forward while still delivering these huge performance wins.
This is a great response but this is "why is Go better than JavaScript?" whereas my question is "why is Go better than C#, given that C# was famously created by the guy writing the blog post and Go is a language from a competitor?"
C# and TypeScript are Hejlsberg's children; C# is such an obvious pick that there must have been a monster problem with it that they didn't think could ever be fixed.
C# has all that stuff that the FAQ mentions about Go while also having an obvious political benefit. I'd hope the creator of said language who also made the decision not to use it would have an interesting opinion on the topic! I really hope we find out the real story.
As a C# developer I don't want to be offended but, like, I thought we were friends? What did we do wrong???
Anders answers that question here - https://www.youtube.com/watch?v=10qowKUW82U&t=1154s
Transcript: "But I will say that I think Go definitely is much more low-level. I'd say it's the lowest level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In contrast, C# is sort of bytecode-first, if you will. There are some ahead-of-time compilation options available, but they're not on all platforms and don't really have a decade or more of hardening. They weren't engineered that way to begin with. I think Go also has a little more expressiveness when it comes to data structure layout, inline structs, and so forth."
Thanks for the link. I'm not fully convinced by Anders' answer. C# has records, first-class functions, structs, Span. That's a lot of control - I'd say more than Go offers. I'd even say C# is much closer to TS than Go is. You could use records for the data structures. The only little annoyance is that you'd need to write the functions as static methods. So the argument for easy translation would seem to lead to C#. Also, C# has advantages over Go, e.g. null safety.
Sure, AOT is not as mature in C#, but is that reason enough to be a showstopper? It seems there are other reasons Anders doesn't want to address publicly. Maybe reasons as simple as "Go is 10 times easier to pick up than C#" and "language features don't matter when the project matters". Those would indeed hurt the image of C#, and Anders obviously doesn't want that.
But I don't see it as big drama.
This is a great link, thank you!
For anyone who can't watch the video, he mentions a few things (summarizing briefly just the linked time code, it's worth a watch):
- Go being the lowest level language that still has garbage collection
- Inline structs and other data structure expressiveness features
- Existing JS code is in a C-like function+data structure style and not an OOP style; this is easier to translate directly to Go, while C# would require OOPifying it.
An unpopular pick that is probably more low level than Go but also still has a GC: D. Understandable why you wouldn't pick D though. Its ecosystem is extremely small.
I think you D fans need to dogfood a startup based around it.
It's a fascinating language, but it lacks a flagship product.
I feel the same way about Haxe. Someone created an amazing language, but it lacks a big enough community.
Realistically, languages need two things for adoption: momentum and ease of use. Rust has more momentum than ease, but arguably can solve problems higher-level languages can't.
I'm half imagining a hackathon-like format where teams are challenged to use niche languages. The foundations behind these languages could fund prizes.
Did my post come off as a fan? I directly criticized its ecosystem. It wouldn't be my first pick either. I was just making conversation that there are other options.
And AFAIK Symmetry Investments is that dogfood startup.
A missed opportunity to improve C# by dogfooding it with the TS compiler rewrite.
They are trying to finish their current project and not redo all the projects which their current project may depend upon.
"Finish"?
C# is too old to change that drastically, just like me
> "given that C# was famously created by the guy writing the blog post"
What is this logic? "You worked on C# years ago so you must use C# for everything"?
"You must dictate C# to every team you lead forever, no matter what skills they have"?
"You must uphold a dogma that C# is the best language for everything, because you touched it last"?
Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those? Because there is no logic; the person who created hammers doesn't have to use hammers to solve every problem.
Yes, but C# is the Microsoft language, and I would say TypeScript is 2nd place Microsoft language (sorry F# folks - in terms of popularity not objective greatness of course).
So it's not just that the lead architect of C# is involved in the TypeScript changes. It's also that this is under the same roof and the same sign hangs on the building outside for both languages.
If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?
Funny you bring up this analogy. Tons of auto manufacturers these days license other manufacturers' engines and use them in their cars - e.g. a fair number of Ford's cars have had Mazda engines and a fair number of Mazdas have had Ford engines.
Could you give some examples of both? Also, why did they choose to do this?
Toyota 86 and Subaru BRZ are basically the same car. The car was designed by Toyota while Subaru supplied the engine. Just one example.
F# isn't in the running for third either.
Maybe top ten behind MSSQL, Powershell, Excel Formulae, DAX etc.
hey, there are dozens of us F# users! dozens!
I do love F#, but its compiler is a rusty set of monkey bars. It's somehow single pass, meaning the type checker will struggle if you don't reorder certain expressions - but also dog slow, especially for `inline` definitions (which work more like templates or hygienic macros than .net generics, and are far more powerful.) File order matters, bafflingly! Newer .net features like spans and ref structs are missing with no clear path to implementation. Doing moderately clever things can cause the compiler to throw weird, opaque, internal errors. F# is built around immutability but there's no integration with the modern .net immutable collections.
It's clearly languishing and being kept alive by a skeleton crew, which is sad, because it deserves better, but I've used research prototypes less clunky than what ought to be a flagship.
> Newer .net features like spans and ref structs are missing with no clear path to implementation
Huh? They're already implemented! It took years and they've still got some rough edges, yes, but they've been implemented for a few years now.
Agreed with the rest, though. As much as I love working with F#, I've jumped ship.
> "So it's not just that the lead architect of C# is involved in the TypeScript changes."
Anders Hejlsberg hasn't been the lead architect of C# for like 13 years. Mads Torgersen is:
https://dotnetcore.show/episode-104-c-sharp-with-mads-torger... - "I got hired by Microsoft 17 years ago to help work on C#. First, I worked with Anders Hejlsberg, who’s sort of the legendary creator and first lead designer of C#. And then when he and I had a little side project with others to do TypeScript, he stayed over there. And I got to take over as lead designer C#. So for the last, I don’t know, nearly a decade, that’s been my job at Microsoft to, to take care of the evolution of the C# programming language"
Years later, "why aren't you using YOUR LANGUAGE, huh? What's the matter, you don't like YOUR LANGUAGE?" is pushy and weird; he's a person with a job, not a religious cult leader.
> "If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?"
Like these? https://www.slashgear.com/1642034/fords-powered-by-non-ford-...
> "why aren't you using YOUR LANGUAGE, huh? What's the matter, you don't like YOUR LANGUAGE?" is pushy and weird
It's also not what anyone said.
> It's best not to use quotation marks to make it look like you're quoting someone when you're not. <https://news.ycombinator.com/item?id=21643562>
It's a bad look for both C# and TypeScript. Anybody starting a new code base now would be looking for ways to avoid both and jump right to Go.
I'm struggling to understand how this is a bad look for Typescript. Do you mean that the specific choice of Go reflects poorly on Typescript, or just the decision to rewrite the compiler in a different non-TS language?
If it's the latter, I think the pitch of TS remains the same — it's a better way of writing JS, not the best language for all contexts.
I think a lot of folks downplay the performance costs for the convenience of a shared code-base between the front and backend.
If the TS team is getting a 10x improvement moving from TS to Go, you might imagine you could save about 10x on your server CPU, or that your backend would be 10x more responsive.
If you have dedicated teams for front and back anyhow, is a 10x slowdown really worth a shared codebase?
If they're writing (actually porting) a _compiler_, perhaps.
if I had to use Go I’d change my career and go do some gardening :)
I like Anders' answer there: "But you can achieve pretty great things with it".
I actually really enjoy Go. Sure it has a type system I wish was more powerful with lots of weird corners ( https://100go.co/ ), but it also has REALLY GOOD tooling- lots of nice libraries, the compiler is fast, the editor tooling is rock solid, it's easy to add linters to warn you about many issues (golangci-lint), and releasing binaries and updating package repositories is super nice (Goreleaser).
I'd probably have said the same 5 years ago; it's surprising how easily you change sides once you actually use it in a team.
I was mostly joking… some of the most amazing shit code-wise I have seen in “non-mainstream” languages (fortran leads the way here)
I had to, and I do think a lot about gardening these days...
Go doesn't run in the browser however (except WASM but that is different).
> Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those?
as you know full well, Delphi and Turbo Pascal don't have strong library ecosystems, don't have good support for non-Windows platforms, and don't have a large developer base to hire from, among other reasons. if Hejlsberg was asked why Delphi or Turbo Pascal weren't used, he might give one or more of those reasons. the question is why he didn't use C#, for which those reasons don't apply.
GP's answer is a great answer to why Go instead of Rust, which u/no_wizard asked about. And the answer to that boils down to the need to traverse data structures in ways which Rust makes difficult, and the simplicity of a GC.
C# is a decently-designed language, but its first principles are being microsoft-y and java-y, which are perhaps two of my least favorite principles. that aside, i've worked on C# backends deployed to lots of linux boxes and it's not really second-rate these days.
Microsoft's implementation has been cross platform for almost a decade now. You're way too late to the Mono FUD party.
Almost a decade? Amazing. Considering Go has been cross-platform since its inception (almost twice as long as that), and Rust too, it's no wonder developer mindshare is elsewhere.
It’s a political anti-benefit in most of the open-source world. And C# is not considered a high quality runtime once you leave Windows.
This is Anders Hejlsberg, the creator of C#, working on a politically important project at Microsoft. That's what I mean by political benefit. The larger open source world doesn't matter for this decision which is why this is a simple announcement of an internal Microsoft decision rather than an invitation for comments ahead of time.
I'm sure Microsoft's strategy department would disagree with you. As a C# devotee, I get that you're upset. And you may want to update your priors on where C# sits in Microsoft's current world. But I think it's a mistake to imagine this isn't a well-reasoned decision.
They can disagree if they want but as a career-long Microsoft developer they can't fool me that easily. I'm not even complaining, I'm just stating a fact that high-level steering decisions like this are made in Teams meetings between Microsoft employees, not in open discussion with the community. It's the same in .NET, which is a very open source project whose highest-level decisions are, nonetheless, made in Teams meetings between Microsoft employees and then announced to the public. I'm fine with this but let's not kid ourselves about it.
That said, I must have misstated my opinion if it seems like I didn't think they have a good reason. This is Anders Hejlsberg. The guy is a genius; he definitely has a good reason. They just didn't say what it is in this blog post (but did elsewhere in a podcast video linked in the HN thread).
> The larger open source world doesn't matter for this decision
It obviously does because the larger open source world are huge users of Typescript. This isn't some business-only Excel / PowerBI type product.
To put it another way, I think a lot of people would get quite pissed if tsc was going to be rewritten in C# because of the obvious headaches that's going to cause to users. Go is pretty much the perfect option from a user's point of view - it generates self-contained statically linked binaries.
https://learn.microsoft.com/en-us/dotnet/core/deploying/sing...
It would have a substantial risk for the typescript project. Many people would see it as an unwanted and hostile push of a Microsoft technology on the typescript community.
And there would be logistical problems. With go, you just need to distribute the executable, but with c#, you also need a .net runtime, and on any platform that isn't Windows that almost certainly isn't already installed. And even if it is, you have to worry if the runtime is sufficiently up to date.
If they used c# there is a chance the community might fork typescript, or switch to something else, and that might not be a gamble MS would want to take just to get more exposure for c#.
C# has had single-file publishing for a while: https://learn.microsoft.com/en-us/dotnet/core/deploying/sing...
Okay, not to be petty here, but it's worth noting that on his GitHub he has not starred the dotnet repository, yet has starred multiple Go repos and multiple other C++ and TS repos.
Modern C# (.NET Core and newer) works perfectly fine on Linux.
> And C# is not considered a high quality runtime once you leave Windows.
By who?
Usually by someone who hasn't used C# since 2015 (when this opinion was fairly valid)
It's always the same response: C# was crappy, but it's not crappy anymore. Well, guess what: Go has been not-crappy for a lot longer than C# has been not-crappy. Maybe that's part of the reason people like it more.
.NET executables requires a runtime environment to be installed.
Go executables do not.
TSC is installed in too many places for that burden to be introduced all of a sudden. It's the same reason Java has had a complicated acceptance history: it's fine in the places where it is pre-installed, but nowhere else.
Node/React/TypeScript developers do not want to install .NET all of a sudden. If that reaction seems unreasonable, pretend they had decided to write it in Java and ask whether Node/React/TypeScript developers would want to install Java.
FYI this hasn’t been the case with C# for a very long time now.
.NET has been able to build a self-contained single-file executable for both the JIT and AOT targets for quite some time. Java also does not require the user to install a runtime; jlink and jpackage have both been around for a long time.
C# AOT filesizes are huge compared to Go.
Do you have data backing that up? Per https://github.com/MichalStrehovsky/sizegame:
C#: 945 kB
Go: 2174 kB
Both are EXEs you just copy to the machine: no separate runtime needed, and they talk directly to the OS.
Maybe some other runtimes do this, or it has changed, but in the past self-contained single-file .NET deployment just meant that it rolled all the files up during publishing, and when you ran it, it extracted them to a folder. Not really a single statically linked executable.
You can indeed produce a compiled native executable with minimal bloat: https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...
It hasn't done that in years.
I personally find Go miles easier than Rust.
Is this the ultimate reason - Go is fast enough without being overly difficult? I'm humbly open to being wrong.
While I'm here: any reason Microsoft isn't sponsoring a solid open-source game engine?
Even a bit of support for Godot's C# (help them get it working on the web) would be great.
Even better would be a full C# engine with support for web assembly.
https://github.com/godotengine/godot/issues/70796
> Even a bit of support for Godot's C#( help them get it working on web), would be great.
They did that. https://godotengine.org/article/introducing-csharp-godot/
At least some initial grant to get it started.
Getting C# working on the web would be amazing. It is already on the roadmap, but some sponsorship would help tremendously, for sure.
Ok. Credit where credit is due, but considering the sheer value of having the next generation of programmers comfortable with .NET, Microsoft *should* chip in more.
Hasn't Microsoft largely hitched their horse to Go these days, though (not just this project)? They even maintain their own Go compiler: https://github.com/microsoft/go
It is a huge company. They can do more than one thing. C#/.NET certainly isn't dead, but I'm not sure they really care if you do use it like they once did. It's there if you find it useful. If not, that's cool too.
We're talking about a nominal amount of funding to effectively train tens of thousands of developers.
I think Microsoft can find the money if they wanted to.
I'm sure Microsoft could find the money to do a lot of different things. But why that instead of the infinite alternatives that the money could be spent on instead?
History has shown Microsoft abandoning every gamedev toolkit or SDK they “support”: Managed DirectX, XNA, etc.
Personally, I would like them to never touch the game dev side of the market.
"any reason Microsoft isn't sponsoring a solid open source game engine"
I can see them doing this in the future, tbh. Given how large their Xbox gaming ecosystem is, this path makes a lot of sense, since they could cut costs while giving their studios and indie developers more options.
While I'm dreaming of things that will never ever happen, I would absolutely love for them to buy the game engine side of Unity and open source it.
Unless I missed Unity sorting a ton of stuff out, I assume they're going to have to sell themselves off for parts at some point, after the runtime-fee fiasco that was supposed to make them profitable led to developers being angry or outright leaving the ecosystem. If that happens, my assumption is that MS buys it for this reason, unless the DOJ gets involved for some reason.
> we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
Cool. Can you tell us a bit more about the technical process of porting the TS code over to Go? Are you using any kind of automation or translation?
Personally, I've found Copilot to be surprisingly effective at translating Python code over to structurally similar Go code.
It seems like, without mentioning any language by name, this answers "why not Rust" better than "why not C#."
I don't think Go is a bad choice, though!
I find the discussion about the choice quite interesting, and many points are very convincing (like the GC one). But I am a bit confused about the comparison between Go and C#. Both should meet most of the criteria like GC, control over memory layout/allocation and good support for concurrency. I'm curious what the weaknesses of C# for this particular use case were that lead to the decision for Go.
Anders answers this in the video: Go is lower level and also closer to JavaScript's programming style. They didn't want to go fully object-oriented for this project.
C# is fine. But last I checked, AOT compilation generates a bunch of .dll files alongside the executable, which is less suitable for a CLI program than Go's zero-dependency binary.
C# can create single-binary executables, even without native AOT.
They are still going to be significantly bigger than the equivalent Go binary because of the huge .NET runtime, no?
https://github.com/MichalStrehovsky/sizegame
C#: 945 kB
Go: 2174 kB
Since this is just Hello World - TinyGo: 644 kB.
Is this a fair comparison, won't doing anything more significant than `print` in C# require a .NET framework to be installed (200MB+)?
No. This is normal native compilation mode. As you reference more features from either the standard library or the dependencies, the size of the binary will grow (sometimes marginally, sometimes substantially if you are heavily using struct generics with virtual members), but on average it should be more scalable than Go’s compilation model. Even JIT-based single-file binaries, with trimming, take about ~13-40 MB depending on the task. The runtime itself AFAIK, if installed separately, is below 100MB (installing full SDK takes more space, which is a given).
Spending ages slamming your head on your keyboard because you get a DLL error or similar running a .NET app and just can't find the correct runtime version / download is a great pastime.
Then, when you find the correct version, you have to install both the x86 and x64 versions because the first one you installed doesn't work.
Yeah, great ecosystem.
At least a Go binary runs 99.99999% of the time when you start it.
Depends on how well trimming works. It's probably still larger than Go even with trimming, but Go also has a runtime and won't produce tiny binaries.
You can choose how the linking process is done, just like you can choose to have a Go binary with dependencies.
C# has an option to publish to a single self-contained file.
It would be big enough that people would find it annoying (unless using AOT, which is hard).
So when can we expect Go support in Visual Studio? I am sold by Anders' explanation that Go is the lowest-level language you can use that still has garbage collection!
You can also have GC in C++ and generate even faster code.
Personally, I want to know why Go was chosen instead of Zig. I think Zig is really more WASM-friendly than Go, and it's much more similar to JavaScript than Rust is.
Memory management? Or a stricter type system?
Zig isn't memory safe, has regular breaking changes, and doesn't have a garbage collector.
For being production-ready?
Thanks for the thoughtful response!
Go is quite difficult to embed in other applications due to the runtime.
What do you see as the future for use cases where the typescript compiler is embedded in other projects? (Eg. Deno, Jupyter kernels, etc.)
There's some talk of an inter-process API, but only vague hand-waving about the technical details. What's the vision?
In TS7 will you be able to embed the compiler? Or is that not supported?
Go has buildmode=c-shared, which compiles your program to a C-style shared library with C ABI exports. Any first call into your functions initializes the runtime transparently. It's pretty seamless and automatic, and it'll perform better than embedding a WASM engine.
We are sure there will be a way to embed via something like WebAssembly, but the goal is to start from the IPC layer (similar to LSP), and then explore how possible it will be to integrate at a tighter level.
Golang is actually pretty easy to embed into JS/TS via wasm. See esbuild.
Esbuild is distributed as a series of native executables that are selectively installed by looking at arch and platform. Although you can build esbuild in wasm (and that's what you use when you run it in the browser), what you actually run from .bin in the CLI is a native executable, not wasm.
Why embed it if you can run a process alongside yours and use efficient IPC? I suppose the compiler code shouldn't be in some tight loop where an IPC boundary would be a noticeable slowdown. Compilation occurs relatively rarely, compared to running the compiled code, in things like Node / Deno / Bun / Jupyter. Language servers use this model with a fairly chatty JSON-RPC IPC, and they don't seem to feel slow.
Because running a parallel process is often difficult. In most cases, the question becomes:
So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC? How do you shut it down when the 'host' process dies?
Not vaguely. Not hand wave "just launch it". How exactly do you do it?
How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.
How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?
When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.
So now each kernel process has to manage another process, which it talks to via IPC?
...
Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.
> So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC?
Usually the very easiest way to do this is to launch the target as a subprocess and communicate over stdin/stdout. (Obviously, you can also negotiate things like shared memory buffers once you have a communication channel, but stdin/stdout is enough for a lot of stuff.)
> How do you shut it down when the 'host' process dies?
From the perspective of the parent process, you can go through some extra work to guarantee this if you want; every operating system has facilities for it. For example, in Linux, you can make use of PR_SET_PDEATHSIG. Actually using that facility properly is a bit trickier, but it does work.
However, since the child process, in this case, is aware that it is a child process, the best way to go about it would be to handle it cooperatively. If you're communicating over stdin/stdout, the child process's stdin will close when the parent process dies. This is portable across Windows and UNIX-likes. The child process can then exit.
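The child side of that cooperative shutdown is tiny. A sketch, again assuming a made-up line-oriented protocol:

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            // Serve one request per line (the protocol is illustrative).
            fmt.Println("ack:", scanner.Text())
        }
        // Scan returns false on EOF: the parent closed our stdin or died,
        // so we fall through and exit instead of lingering as an orphan.
    }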
> How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.
On Android, there is nothing special to do here as far as I know. You should be able to bundle and spawn a native process just fine. Go binaries are no exception.
On iOS, it is true that apps are not allowed to spawn child processes, as far as I am aware, so you'd need a different strategy there. If you still want a native code approach, though, it's more than doable. Since you're on iOS, you'll have some native code somewhere. You can compile Go code into a Clang-compatible static library archive using -buildmode=c-archive. There's a bit more nuance to getting something that will link properly on iOS, but it is supported by Go itself (Go supports iOS and Android in the toolchain and via gomobile). Once you have something that can be linked into the process space, the old IPC approach would continue to work, with the semantic caveat that it's not technically interprocess anymore. This approach can also be used in any other situation where you're running native code, so long as you can link C libraries.
If you're in an even more restrictive situation, like, I dunno, Cloudflare Pages Functions, you can use a WASM bundle. It comes at a performance hit, but given that the Go port of the TypeScript compiler is already roughly 3.5x faster than the TypeScript implementation, it probably will not be a huge issue compared to today's performance.
> How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?
There are no particular complexities with distributing Go binaries. You need to ship a binary for each architecture and OS combination you want to support, but Go has relatively straightforward cross-compiling, so this is usually very easy to do. (Rather unusually, it is even capable of cross-compiling to macOS and iOS from non-Apple platforms. Though I bet Zig can do this, too.) You just include the binary in your build. If you are using some bindings, I would expect the bindings to take care of this by default, making your resulting binaries "just work" as needed.
It will not conflict with other applications that do the same thing.
> When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.
> So now each kernel process has to manage another process, which it talks to via IPC?
Yes, that's right: you would have to have another process for each existing process that needs its own compiler instance, if going with the IPC approach. However, unless we're talking about an obscene number of processes, this is probably not going to be much of an issue. If anything, keeping it out-of-process might help improve matters if it's currently doing things synchronously that could be asynchronous.
Of course, even though this isn't really much of an issue, you could still avoid it by going with another approach if it really was a huge problem. For example, assuming the respective Jupyter kernel already needs Node.JS in-process somehow, you could just as well have a version of tsc compiled into a Node-API module, and do everything in-process.
> Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.
Except for browsers and edge runtimes, it should be possible to make an embedded version of the compiler if it is necessary. I'm not sure if the TypeScript team will maintain such a version on their own; it remains to be seen exactly what approach they take for IPC.
I'm not a TypeScript Compiler developer, but I hope these answers are helpful in some way anyways.
Thanks for chiming in with these details, but I would just like to say:
> It will not conflict with other applications that do the same thing.
It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.
For example, it could bind a fixed default port. That would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?
Not conflicting is not, by default, a property of parallel binary deployment and communication via IPC.
IPC is, by definition, intended to be accessible by other processes.
Jupyter kernels, for example, are launched with a specified port and a secret passed by CLI argument, if I recall correctly.
However, you'd have to rely on that mechanism being built into the typescript compiler service.
...ie. it's a bit complicated right?
Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it alongside their apps either (usually).
> Not conflicting is not a property of parallel binary deployment
I fail to see how starting another process under an OS like Linux or Windows can be conflicting. Don't share resources, and you're conflict-free.
> IPC is, by definition intended to be accessible by other processes
Yes, but you can limit the visibility of the IPC channel to a specific process, in the form of stdin/stdout pipe between processes, which is not shared by any other processes. This is enough of a channel to coordinate creation of a more efficient channel, e.g. a shmem region for high-bandwidth communication, or a Unix domain socket (under Linux, you can open a UDS completely outside of the filesystem tree), etc.
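Go's standard library exposes that last trick directly: for "unix" networks, a name starting with "@" lands in Linux's abstract namespace, so nothing is created on disk. A sketch (the socket name is arbitrary):

    package main

    import (
        "log"
        "net"
    )

    func main() {
        // Linux-only: "@" selects the abstract namespace, outside the filesystem tree.
        l, err := net.Listen("unix", "@tsc-ipc-demo")
        if err != nil {
            log.Fatal(err)
        }
        defer l.Close()

        conn, err := l.Accept()
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // ...speak whatever protocol the two sides negotiated over stdin/stdout.
    }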
A Unix shell is a thing that spawns and communicates with running processes all day long, and I've yet to hear about any conflicts arising from its normal use.
This seems like an oddly specific take on this topic.
You can get a conflicting resource in a shell by typing 'npm start' twice in two different shells, and it'll fail with 'port in use'.
My point is that you can do non-conflicting IPC, but by default IPC is conflicting because it is intended to be.
You cannot bind the same port, semaphore, whatever if someone else is using it. That's the definition of having addressable IPC.
I don't think arguing otherwise is defensible or reasonable.
Worrying that a network service might bind the same port as another copy of the same network service, deployed on the same target by another host application, is entirely reasonable.
I think we're getting off into the woods here with an arbitrary 'die on this hill' point about semantics which I really don't care about.
TLDR: If you ship an IPC binary, you have to pay attention to these concerns. Pretending otherwise means you're not doing it properly.
It's not an idle concern; it's a real concern that real actual application developers have to worry about, in real world situations.
I've had to worry about it.
I think it's not unfair to think it's going to be more problematic than the current, very easy, embedded story, and it is a concern that simply does not exist when you embed a library instead of communicating using IPC.
> It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.
Sure, some IPC approaches can run into issues, such as using TCP connections over loopback. However, I'm describing an approach that should never conflict since the resources that are shared are inherited directly, and since the binary would be embedded in your application bundle and not shared with other programs on the system. A similar example would be language servers which often work this way: no need to worry about conflicts between different instances of language servers, different language servers, instances of different versions of the same language server, etc.
There's also some precedent for this approach since as far as I understand it, it's also what the Go-based ESBuild tool does[1], also popular in the Node.JS ecosystem (it is used by Vite.)
> For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?
> Not conflicting is not a property of parallel binary deployment and communication via IPC by default.
> IPC is, by definition intended to be accessible by other processes.
Yes, although the set of processes which the IPC mechanism is designed to be accessible by can be bound to just one process, and there are cross-platform mechanisms to achieve this on popular desktop OSes. I cannot speak to why one would choose TCP over stdin/stdout, but I don't expect that tsc will pick a method of IPC that is flawed in this way, since that would not follow precedent anyway. (e.g. tsserver already uses stdio[2].)
> Jupyter kernels for example are launched with a specified port and a secret by cli argument if I recall correctly.
> However, you'd have to rely on that mechanism being built into the typescript compiler service.
> ...ie. it's a bit complicated right?
> Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it along side their apps either (usually).
Well, I wouldn't honestly go as far as to say it's complicated. There's a ton of precedent for how to solve this issue without any conflict. I cannot speak to why Jupyter kernels use TCP for IPC instead of stdio; I'm sure they have reasons why it makes more sense in their case. For example, in some use cases it could be faster, or perhaps just simpler, to have multiple channels of communication, and doing this with multiple pipes to a subprocess is a little more complicated and less portable than stdio. Same for shared memory: you can always have a protocol to negotiate shared memory across some serial IPC mechanism, but you'll almost always need a couple of different shared memory backends, and it adds some complexity. So that's one potential reason.
(edit: Another potential reason to use TCP sockets is, of course, if your "IPC" is going across the network sometimes. Maybe this is of interest for Jupyter, I don't know!)
That said, in this case, I think it's a non-issue. ESBuild and tsserver demonstrate that communication over stdio is sufficient for these kinds of use cases.
And of course, even if the Jupyter kernel itself has to speak the TCP IPC protocols used by Jupyter, it can still subprocess a theoretical tsc and use stdio-based IPC. Not much complexity to speak of.
Also, unrelated, but it's funny you should say that about postgres, because actually there have been several different projects that deliver an "embeddable" subset of postgres. Of course, the reasoning for why you would not necessarily want to embed a database engine is quite different from this case, since here IPC is merely an implementation detail, whereas in the database case the network protocol and centralized server are essentially the entire point.
[1]: https://github.com/evanw/esbuild/blob/main/cmd/esbuild/stdio...
[2]: https://github.com/microsoft/TypeScript/wiki/Standalone-Serv...
Javascript is also quite difficult to embed in other applications. So not much has changed, except it's no longer your language of choice.
TypeScript compiles to JavaScript. It means both `tsc` and the TS program can share the same platform today.
With a TSC in Go, it's no longer true. Previously you only had to figure out how to run JS, now you have to figure out both how to manage a native process _and_ run the JS output.
This obviously matters less for situations where you have a clear separation between the build stage and the runtime stage. Most people complaining here seem to be talking about environments where compilation is tightly integrated with the execution of the compiled JS.
This is awesome. Thanks to you and all the TypeScript team for the work they put on this project! Also, nice to see you here, engaging with the community.
Porting to Go was the right decision, but part of me would've liked to see a different approach to solve the performance issue. Here I'm not thinking about the practicality, but simply about how cool it would've been if performance had instead been improved via:
- porting to OCaml. I contributed to Flow once upon a time, and a version of TypeScript in OCaml would've been huge in unifying the efforts here.
- porting to Rust. Having "official" TypeScript crates in rust would be huge for the Rust javascript-tooling ecosystem.
- a new runtime (or compiler!). I'm thinking here an optional, stricter version of TypeScript that forbids all the dynamic behaviours that make JavaScript hard to optimize. I'm also imagining an interpreter or compiler that can then use this stricter TypeScript to run faster or produce an efficient native binary, skipping JavaScript altogether and using types for optimization.
This last option would've been especially exciting since it is my opinion that Flow was hindered by the lack of dogfooding, at least when I was somewhat involved with the project. I hope this doesn't happen in the TypeScript project.
None of these are questions, just wanted to share these fanciful perspectives. I do agree Go sounds like the right choice, and in any case I'm excited about the improvement in performance and memory usage. It really is the biggest gripe I have with TypeScript right now!
Not Daniel, but I've ported a typechecker from PHP to Rust (with some functional changes) and also tried working with the official Hack OCaml-based typechecker (a precursor to Flow).
Rust and OCaml are _maybe_ prettier to look at, but for the average TypeScript developer Go is a much more understandable target IMO.
Lifetimes and ownership are not trivial topics to grasp, and they add overhead (as discussed here: https://github.com/microsoft/typescript-go/discussions/411) that not all contributors might grasp immediately.
I am curious why dotnet was not considered - it should run everywhere Go does with added NativeAoT too, so I am especially curious given the folks involved ;)
(FWIW, It must have been a very well thought out rationale.)
Edit: watched the relevant clip from the GH discussion - makes sense. Maybe push NativeAOT to be as good?
I am (positively) surprised Hejlsberg has not used this opportunity to push C#: a rarity in the software world where people never let go of their darlings. :)
Discussion and video link here for anyone else interested: https://github.com/microsoft/typescript-go/discussions/411#d...
And lightly edited transcript here: https://github.com/microsoft/typescript-go/discussions/411#d...
It was considered and tested, just not used in the end.
Well-optimized JavaScript can get to within about 1.5x the performance of C++ - something we have experience with having developed a full game engine in JavaScript [1]. Why is the TypeScript team moving to an entirely different technology instead of working on optimizing the existing TS/JS codebase?
[1] https://www.construct.net/en
Well-optimized JavaScript can, if you jump through hoops like avoiding object creation and storing your data in `Uint8Array`s. But idiomatic, maintainable JS simply can't (except in microbenchmarks where allocations and memory layout aren't yet concerns).
In a game engine, you probably aren't recreating every game object from frame to frame. But in a compiler, you're creating new objects for every file you parse. That's a huge amount of work for the GC.
I'd say that our JS game engine codebase is generally idiomatic, maintainable JS. We don't really do anything too esoteric to get maximum performance - modern JS engines are phenomenal at optimizing idiomatic code. The best JS performance advice is to basically treat it like a statically typed language (no dynamically-shaped objects etc) - and TS takes care of that for you. I suppose a compiler is a very different use case and may do things like lean on the GC more, but modern JS GCs are also amazing.
Basically I'd be interested to know what the bottlenecks in tsc are, whether there's much low-hanging fruit, and if not why not.
Note that games are based on main loops + events, for which JITs are optimized, while compilers are typically single run-to-completion, for which JITs aren't.
So this might be a very different performance profile.
*edit* I had initially written "single-pass", but in the context of a compiler, that's ambiguous.
In other words you write asm.js, a highly optimizable subset of JavaScript that preceded and inspired WebAssembly, and hope your browser has a dedicated asm.js JIT compiler - which it doesn't, because asm.js was superseded by WebAssembly.
Our best estimate for how much faster the Go code is (in this situation) than the equivalent TS is ~3.5x
In a situation like a game engine I think 1.5x is reasonable, but TS has a huge amount of polymorphic data reading that defeats a lot of the optimizations in JS engines that get you to monomorphic property access speeds. If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.
I used to work on compilers & JITs, and 100% this — polymorphic calls are the killer of JIT performance, which is why something native is preferable to something that JIT compiles.
Also for command-line tools, the JIT warmup time can be pretty significant, adding a lot to overall command-to-result latency (and in some cases even wiping out the JIT performance entirely!)
> If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.
I really wish JS VMs would invest in this. The DOM is full of large inheritance hierarchies, with lots of subtypes, so a lot of DOM code is megamorphic. You can do tricks like tearing off methods from Element to use as functions, instead of virtual methods as usual, but that's quite a pain.
"Well optimized Javascript", and more generally, "well-optimized code for a JIT/optimizer for language X", is a subset of language X, is an undefined subset of language X, is a moving subset of language X that is moving in ways unrelated to your project, is actually multiple such subsets at a minimum one per JIT and arguably one per version of JIT compilers, and is generally a subset of language X that is extremely complicated (e.g., you can lose optimization if your arrays grow in certain ways, or you can non-locally deoptimize vast swathes of your code because one function call in one location happened to do one thing the JIT can't handle and it had to despecialize everything touching it as a result) such that trying to keep a lot of developers in sync with the requirements on a large project is essentially infeasible.
None of these things say "this is a good way to build a large compiler suite that we're building for performance".
Please note that compilers and game engines have extremely different needs and performance characteristics—and also that statements like "about 1.5x the performance of C++" are virtually meaningless out-of-context. I feel we've long passed this type of performance discussion by and could do with more nuanced and specific discussions.
> Why is the TypeScript team moving to an entirely different technology
A few things mentioned in an interview:
- Cannot build native binaries from TypeScript
- Cannot as easily take advantage of concurrency in TypeScript
- Writing fast TypeScript requires you to write things in a way that isn't 'normal' idiomatic TypeScript; it's easier to onboard new people onto a more idiomatic codebase.
The message I hear is: don't use JS, don't use async. Music to my ears.
All Go is async though.
Who wants to spend all their time hand-tuning JS/TS when you can write the same code in Go, spend no time at all optimizing it, and get 10x better results?
What kind of C++ and what kind of JS?
- C++ with thousands of tiny objects and virtual function calls?
- JavaScript where data is stored in a large Int32Array and operated on like a VM?
If you know anything about how JavaScript works, you know there is a lot of costly and challenging resource management.
While Go can be considered entirely different technology, I'd argue that Go is easy enough to understand for the vast majority of software developers that it's not too difficult to learn.
(disclaimer: I am a biased Go fan)
It was very explicitly designed with this goal. The idea was to make a simpler Java that is as easy as possible to deploy and as fast as possible to compile, and by those measures it is a resounding success.
Does "well-optimized JavaScript" mean "you can't use Objects"?
In JavaScript, you can't even put 8M keys in a Hashmap; inserts take > 1 second per element:
https://issues.chromium.org/issues/42202799
Well-optimized JS isn't the only point of operation here. There's a LOT of exchange, parsing and processing that interacts with the File System and the JS engine itself. It isn't just a matter of loading a JS library and letting it do its thing. Every call that crosses the boundaries from JS runtime to the underlying host environment has a cost. This is multiplied across potentially many thousands of files.
Just going from ESLint to Biome is more than a 10x improvement... it's not just 1.5x because it's not just the runtime logic at play for build tools.
Sometimes, the time required to optimize is greater than the time required to rewrite.
Are you comparing perfectly written JS to poorly written C++?
Numeric code can, but compilers have to do a lot of string manipulation which is almost impossible to optimise well in JS.
It sounds like the C++ is not well-optimized then?
How does that scale with number of threads?
I'm not sure how it is in Construct, but IME "well-optimized" JavaScript quickly becomes very difficult to read, debug, and update, because you're relying heavily on runtime implementation quirks and micro-optimizations that make a hash of code cleanliness. Even if you can hit close to native performance, the native equivalent usually has much more idiomatic code. The tsc team needs to balance performance of the compiler against keeping the codebase maintainable, which is especially vital for such a core piece of web infrastructure as TypeScript.
Your JS code is way uglier than their Go code, if you're doing those kinds of shenanigans.
JS is 10x-100x slower than native languages (C++, Go, Rust, etc) if you write the code normally (i.e. don't go down the road of uglifying your JS code to the point where it's dramatically less pleasant to work with than the C++ code you're comparing to).
Why not AOT compiled C#, given the team's historical background?
There is an interview with Anders Hejlsberg here: https://www.youtube.com/watch?v=ZlGza4oIleY
The question comes up and he quickly glosses over it, but by the sound of it he isn't impressed with the performance or support of AOT compiled C# on all targeted platforms.
https://www.youtube.com/watch?v=10qowKUW82U
[19:14] why not C#?
Dimitri: Was C# considered?
Anders: It was, but I will say that I think Go definitely is -- it's, I'd say, the lowest-level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In C#, it's sort of bytecode first, if you will; there is some ahead-of-time compilation available, but it's not on all platforms and it doesn't have a decade or more of hardening. It was not geared that way to begin with. Additionally, I think Go has a little more expressiveness when it comes to data structure layout, inline structs, and so forth. For us, one additional thing is that our JavaScript codebase is written in a highly functional style -- we use very few classes; in fact, the core compiler doesn't use classes at all -- and that is actually a characteristic of Go as well. Go is based on functions and data structures, whereas C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#. That transition would have involved more friction than switching to Go. Ultimately, that was the path of least resistance for us.
Dimitri: Great -- I mean, I have questions about that. I've struggled in the past a lot with Go in functional programming, but I'm glad to hear you say that those aren't struggles for you. That was one of my questions.
Anders: When I say functional programming here, I mean sort of functional in the plain sense that we're dealing with functions and data structures as opposed to objects. I'm not talking about pattern matching, higher-kinded types, and monads.
[12:34] why not Rust?
Anders: When you have a product that has been in use for more than a decade, with millions of programmers and God knows how many millions of lines of code out there, you are going to be faced with the longest tail of incompatibilities you could imagine. So, from the get-go, we knew that the only way this was going to be meaningful was if we ported the existing code base. The existing code base makes certain assumptions -- specifically, it assumes that there is automatic garbage collection -- and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it's not automatic; you can get reference counting or whatever, but then, in addition to that, there's the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.
(https://www.reddit.com/r/golang/comments/1j8shzb/microsoft_r...)
>C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#
They could have used static classes in C#.
he went into more detail about C# in this one: https://youtu.be/10qowKUW82U?t=1154s
He says:
- C# Ahead of Time compiler doesn't target all the platforms they want.
- C# Ahead of Time compiler hasn't been stressed in production as many years as Go.
- The core TypeScript compiler doesn't use any classes; Go is functions and datastructures whereas C# is heavily OOP, so they would have to switch paradigms to use C#.
- Go has better control of low level memory layouts.
- Go was ultimately the path of least resistance.
I am holding out hope for NativeAOT-LLVM https://github.com/dotnet/runtimelab/tree/feature/NativeAOT-...
Anders explained his reasoning in this interview (transcript):
https://github.com/microsoft/typescript-go/discussions/411#d...
this is the "official" response at this point, since it is in the FAQ linked in the OP
I'm not involved in the decisions, but don't C# applications have a higher startup time and memory usage? These are important considerations for a compiler like this that needs to start up and run fast in e.g. new CI/CD boxes.
For a daemon like an LSP I reckon C# would've worked.
Yes, in fact that's one of the main reasons given in the two linked interviews: Go can generate "real" native executables for all the platforms they want to support. One of the other reasons is (paraphrasing) that it's easier to port the existing mostly functional JS code to Go than to C#, which has a much more OOP style.
The C# compiler is written in C# and distributed to multiple platforms. Along with the JIT that runs on all kinds of devices.
Graph for the differences in Runtime, Runtime Trimmed, and AOT .NET.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/n...
Native AOT exists, and C# has many C++-like capabilities, so not at all.
It exists but isn’t the same as a natively compiled binary. A lot gets packed into an AOT binary for it to work. Longer startup times, more memory, etc.
Just like Go, there is no magic here.
Where do you think Go gets those chubby statically linked executables from?
That people have to apply UPX on top.
Go’s static binaries are orders of magnitude smaller than .Net’s static binaries. However, you are right, all binaries have some bloat in order to make them executable.
This is flat out incorrect if you are doing AOT in C#
Not when compiled by NativeAOT. It also produces smaller binaries than Go and has better per-dependency scalability (due to metadata compression, pointer-rich section dehydration, and stronger reachability analysis). This also means you could use F# for this instead, which is excellent for langdev (provided you don't use printf "%A", which is incompatible with it; a small sacrifice).
What is the cross compilation support for NativeAOT though? This is one of the things that Go shines (as long as you don't use CGO, that seems perfectly plausible in this project), and while I don't think it would be a deal breaker it probably makes things a lot easier.
What is the state of WASM support in Go though? :)
I doubt the ability to cross-compile TSC would have been a major factor. These artifacts are always produced on dedicated platforms via separate build stages before publishing and sign-off. Indeed, Go is better at native cross-compilation, whereas .NET NativeAOT can only do cross-arch, and limited cross-OS by tapping into the Zig toolchain.
Seeing that Hejlsberg started out with Turbo Pascal and Delphi, and that Go also has a lot of Pascal-family heritage, he might hold some sympathy for Go as well...
Yes, there is that irony. However, when these kinds of decisions are made by folks with historical roots in how .NET and C# came to be, the .NET team cannot wonder why .NET keeps lagging in adoption versus other ecosystems at companies that aren't traditional Microsoft shops.
Not involved, but there's a faq in their repo, and this answers your question, perhaps, a bit: https://github.com/microsoft/typescript-go/discussions/411
Thanks, but it really doesn't clarify why a team with roots in the .NET ecosystem decided C#/Native AOT isn't fit for purpose.
Pure speculation, but C# is not nearly the first class citizen that go binaries are when you look at all possible deployment targets. The “new” Microsoft likely has some built-in bias against “embrace and extend” architectural and business decisions for developers. Overall this doesn’t seem like a hard choice to me.
Cue rust devotees in 3, 2, ..
> Cue rust devotees in 3, 2, ..
If you are a rust devotee, you can use https://github.com/FractalFir/rustc_codegen_clr to compile your rust code to the same .NET runtime as C#. The project is still in the works but support is said to be about 95% complete.
I don't understand what Anders' past involvement with C# has to do with this. Would the technical evaluation be different if done by Anders vs someone else?
C# and Go are direct competitors, and the advantages of Go that were cited are all features of C# as well, except the lack of top-level functions. That's clearly not an actual problem: you can just define a class per file and make every method static, if that's how you like to code. It doesn't require any restructuring of your codebase. There's also no meaningful difference in platform support: .NET AOT supports Win/Mac/Linux on AMD64/ARM, i.e. every platform a developer might use.
He clearly knows all this, so the obvious inference is that the decision isn't really about features. The most likely problem is a lack of confidence in the .NET team, or some political problems/bad blood inside Microsoft. Perhaps he's tried to use it and been frustrated by bugs; the comment about "battle hardened" feels like where the actual rationale is hiding. We're not getting the full story here, that's clear enough.
I'm honestly surprised Microsoft's policies allowed this. Normally companies have rules that require dogfooding for exactly this reason. A project like this is not terribly urgent, and it has political heft within Microsoft. They could presumably have got the .NET team to fix bugs or make optimizations they need, at least a lot more easily than getting the Go team to do it. Yet they chose not to. Who would have any confidence in adopting .NET for performance-sensitive programs now? Even the father of .NET doesn't want to use it. Anyone who wants to challenge a decision to adopt it can just point at Microsoft's own actions as evidence.
Yea, I came here to say the same thing. Anders' reasons for not going with C# all seem either dubious or superficial and easily worked around.
First he mentions the no classes thing. It is hard to see how that would matter even for automated porting, because like you said, he could just use static classes, and even do a static using statement on the calling side.
Another one of his reasons was that Go was good at processing complex graphs, but it is hard to imagine how Go would be better at that than C#. What language feature does Go have, but C# lack, that supports this? I don't think anyone will be able to demonstrate one. The distinction makes sense for Go vs Rust, but not for Go vs C#.
As for the platform / AOT argument, I don't know as much about that, but I thought it was supposed to be possible now. If it isn't, it seems like it would be better for Microsoft to beef that up than to allow a vote of no confidence to be cast like this.
Thanks, this is a good way to frame it, someone else also phrased similar sentiment which I'm in total agreement with: https://x.com/Lon/status/1899527659308429333
It is especially jarring given that they are a first-party customer who would have no trouble in getting necessary platforms supported or projects expedited (like NativeAOT-LLVM-WASM) in .NET. And the statements of Anders Hejlsberg himself which contradict the facts about .NET as a platform make this even more unfortunate.
I wonder if there's just some cultural / generational stuff happening there too. The fact that the TS compiler is all about compiling a highly complex OOP/functional hybrid language yet is said to use neither objects nor FP seems rather telling. Hejlsberg is famous for designing object oriented languages (Delphi, C#) but the Delphi compiler itself was written largely in assembly, and the C# compiler was for a very long time written in C++ iirc. It's possible that he just doesn't personally like working in the sort of languages he gets paid to design.
There's an interesting contrast here with Java, where javac was ported to Java from C++ very early on in its lifecycle. And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too. Whereas in the .NET world Roslyn took quite a long time to come along, it wasn't until .NET 6, and of course MS rejected it from Windows more or less entirely for the same sorts of rationales as what Anders provides here.
> Roslyn
It was introduced back then with .NET Framework 4.6 (C# 6) - a loong time ago (July 2015). The OSS .NET has started with Roslyn from the very beginning.
> And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too.
NativeAOT uses the same architecture. There is no C++ besides GC and pre-existing compiler back-end (both ILC and RyuJIT drive it during compilation process). Much like GraalVM's Native Image, the VM/host, type system facilities, virtual/interface dispatch and everything else it could possibly need is implemented in C# including the linker (reachability analysis/trimming, kind of like jlink) and optimizations (exact devirtualization, cctor interpreter, etc.).
In the end, it is the TypeScript team members who worked on this port, not Anders Hejlsberg himself, which is my understanding. So we need to take this into account when judging what is being communicated.
> In the end, it is the TypeScript team members who worked on this port, not Anders Hejlsberg himself, which is my understanding
no? https://github.com/microsoft/typescript-go/graphs/contributo...
Ah, I see. Thanks for the clarification. Well, doubly unfortunate then. I wonder if we'll ever know what happened behind the scenes.
Ah C# 6 not .NET 6, thanks for the correction. Cool to hear that the NativeAOT stuff follows the same path.
Yes, when the author of the language feels it is unfit for purpose, it is a different marketing message than a random dude on the Internet on his new startup project.
Link to interview with Anders. (linked from the thread as well) https://www.youtube.com/watch?v=10qowKUW82U&t=1154s
I write a lot of Go and a decent amount of TypeScript. Was there anything you found during this project that you found particularly helpful/nice in Go, vs. TypeScript? Or was there anything about Go that increased the difficulty or required a change of approach?
I'd be curious to hear about the politics and behinds the scenes of this project. How did you get buy-in? What were some of the sticking points in getting this project off of the ground? When you mention that many other languages were used to spike the new compiler, were there interesting learnings?
I feel like you'll need to provide a wasm binary for browser environments and maybe as a fallback in node itself. Last time I checked, Go really struggles to perform when targeting wasm. This might be the only reason I'd like to see it in Rust but I'm still glad you went with Go.
Are there any insights on the platform decision?
Honestly, the choice seems fine to me: the vast majority of users are not compiling huge TypeScript projects in the browser. If you're using Vite/ESBuild, you're already using a Go-based JS toolchain, and last I checked Vite was pretty darn popular. I don't suspect there will be a huge burden for things like playground; given the general performance uplift that the Go tsc implementation already gets, it may in fact be faster even after paying the Wasm tax. (And even if it isn't, it should be more than fine for playground anyways.)
I'm pretty sure that a lot of Vite users with hot reload will run tsc inside the browser (tanstack, react-router)
I am not a Vite expert, however, when running Vite in dev mode, I can see two things:
- There is an esbuild process running in the background.
- If I look at the JavaScript returned to the browser, it is transpiled without any types present.
So even though the URLs in Vite dev mode look like they're pointing to "raw" TypeScript files, they're actually transpiled JavaScript, just not bundled.
I could be incorrect, of course, but it sure seems to me like Vite is using ESBuild on the Node.JS side and not tsc on the web browser side.
> While we’re not yet feature-complete
This is a big concern to me. Could you expand on what work is left to do for the native implementation of tsc? In particular, can you make an argument why that last bit of work won't reduce these 10x figures we're seeing? I'm worried the marketing got ahead of the engineering.
It’s fine; if it’s 2x faster after being feature-complete, I don’t really mind. It still is a free speedup to all existing codebases. Developers don’t need to do anything other than install the latest version of TypeScript, I presume.
Thanks for answering questions.
One thing I'm curious about: What about updating the original Typescript-based compiler to target WASM and/or native code, without needing to run in a Javascript VM?
Was that considered? What would (at a high level) the obstacles be to achieving similar performance to Golang?
Edit: Clarified to show that I indicate updating the original compiler.
It's unlikely that you would get much performance benefit from AOT compiling a TypeScript codebase. (At least not without a ton of manual optimization of the native code, and if you're going to do that, why not just rewrite in a native-first language?)
JavaScript, like other dynamic languages, runs well with a JIT because the runtime can optimize for hotspots and common patterns (e.g. this method's first argument is generally an object with this shape, so write a fast path for that case). In theory you could write an AOT compiler for TypeScript that made some of those inferences at compile time based on type definitions, but
(a) nobody's done that
(b) it still wouldn't be as fast as native, or much faster than JIT
(c) it would be limited - any optimizations would die as soon as you used an inherently dynamic method like JSON.parse()
So basically, TypeScript as a language doesn't allow compiling to machine code as efficient as Golang's? (Edit) And I assume it's not practical to alter the language in a way that this kind of information can be added. (Such as adding a typed version of JSON.parse().)
Amazing news, but I'm wondering what will happen to Monaco editor and all the SaaS that use typescript in the browser?
Not sure if it does, but the video linked in the post might answer your question? I think he is compiling VS Code, which includes the Monaco editor, which is where they are getting the 10x-faster stat. (I might be wrong here.) [0]
[0] https://youtu.be/pNlq-EVld70?feature=shared&t=112
Yeah, I saw that, but whether they'll maintain a browser-compatible version is another question.
Ah, inception compiling. The issue isn't compiling the Monaco editor, but rather will the Monaco editor compile TypeScript 7 in the browser?
That is a good question.
This might be an oddly specific question, but do you think performance improvements like this might eventually lead to features like partial type argument inference in generics? If I recall correctly off the top of my head, performance was one of the main reasons it was never implemented.
thank you, to both of you, for so many years of groundbreaking work. you've both been on the project for, what, 11 years now? such legends.
> You can also tune in to the Discord AMA mentioned in the blog this upcoming Thursday.
Will the questions and answers be posted anywhere outside of Discord after it's concluded?
Daniel, please make this a priority. Post the Q&A transcript to GitHub, at least.
Since the new tsc is written in go, will I be able to pull it into my go web server as a middleware to dynamically transpile ts?
We'll be working on an API that ideally can be used through any language - that would be our preferred means of consuming the new codebase.
Will we still have compiler plugins? What will this mean for projects like ts-patch?
What is the forward paths available for efforts like the TS Playground under Typescript 7 (native)?
One of the nice advantages of js is that it can run so many places. Will TypeScript still be able to enjoy that legacy going forward, or is native only what we should expect in 7+?
We anticipate that we will eventually get a playground working on the new native codebase. We know we'll likely compile down to WebAssembly, but a lot of how it gets integrated will depend on what the API looks like. We're currently giving a lot of thought to that, but we have good ideas. https://github.com/microsoft/typescript-go/discussions/455
Will this be a prerequisite of the 7.0 release?
This is very exciting! I'm curious if this move eventually unlocks features that have been deemed too expensive/slow so far, e.g. typing `ReactElement` more accurately, typing `TemplateStringsArray` etc
I'm curious about the choice of Go to develop the new toolchain. Was the support for parallelism/concurrency a factor in the decision?
Is 10x a starting point or could we expect even more improvements in the future?
Hi Daniel! What's your stance on support for yarn pnp?
pnp is still very cool, and it would be great if we can find a better API story that works well with pnp!
Congrat on the announcement, this is a great achievement!
Amazing!! I did not see timing. When might we see it in VS Code? Edge?
Daniel, congrats! I'm _so_ excited about everything y'all have achieved in the last few years.
When can we just replace the JS runtime with TS and skip the compiler altogether? Start fresh, if you will.
Why Go?
Will the refactor possibly be an occasion for ironing out a spec?
Your patience with Michael Saboff is incredible.
> inexpressive type system
Simplicity is a feature, not a bug. Overly expressive languages become nightmares to work with and reason about (see: C++ templates)
Go's compilation times are also extremely fast compared to Rust, which is a non-negligible cost when iterating on large projects.
Have you considered a closer-to-the-metal language to implement the compiler in, like C or Rust? Have you evaluated further perf improvements?
I don't think c or rust are really 'closer to the metal' than golang (what they're using)
Considering Go is the only language with a garbage collector out of the three languages you mentioned, I'm not sure how you reach the conclusion they're all as close to the metal.
C and Rust both have predictable memory behaviour, Go does not.
When I read the article it was very clear, due to the compiler's in-memory graphs, that they needed a GC.
(I.e., as opposed to reference counting, where if you have cycles, you need to manually go in and "break" them so memory gets reclaimed.)
> When I read the article it was very clear, due to the compiler's in-memory graphs, that they needed a GC.
It's actually pretty easy to do something like this with C, just using something like an arena allocator, or honestly, leaking memory. I actually wrote a little allocator yesterday that just dumps memory into a linked list; it's not very complicated: http://github.com/danieltuveson/dsalloc/
You allocate wherever you want, and when you're done with the big messy memory graph, you throw it all out at once.
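A loose Go analogue of that "free everything at once" pattern, for flavor: carve nodes out of one preallocated slice so the whole graph becomes garbage in a single step rather than object by object. (A sketch of the general idea, not how the linked C allocator works.)

    package main

    import "fmt"

    type Node struct {
        Value    int
        Children []*Node
    }

    // Arena hands out nodes from one backing slice; dropping the arena
    // releases the entire graph at once.
    type Arena struct{ nodes []Node }

    func NewArena(capacity int) *Arena {
        return &Arena{nodes: make([]Node, 0, capacity)}
    }

    func (a *Arena) New(v int) *Node {
        if len(a.nodes) == cap(a.nodes) {
            panic("arena full") // a real arena would chain a new block here
        }
        a.nodes = append(a.nodes, Node{Value: v})
        return &a.nodes[len(a.nodes)-1]
    }

    func main() {
        arena := NewArena(1 << 16)
        root, child := arena.New(1), arena.New(2)
        root.Children = append(root.Children, child)
        child.Children = append(child.Children, root) // cycles are no problem
        fmt.Println(root.Value, root.Children[0].Value)
    }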
There are obviously a lot of other reasons to choose go over C, though (easier to learn, nicer tooling, memory safety, etc).
I get the impression they'd use smart pointers (C++) or Rc/Arc (Rust)
Go isn't that bad in terms of memory predictability, to be honest. It generally has roughly 100% overhead in terms of memory usage compared to no GC. This can be reduced via the GOGC environment variable, at the cost of worse performance if you're not careful.
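The same knob is available programmatically, which makes the trade-off easy to picture; calling this at startup is equivalent to launching the process with GOGC=50:

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        // The default of 100 lets the heap grow to ~2x live data between
        // collections - the ~100% overhead described above. Halving the
        // target trades GC CPU time for a smaller footprint.
        old := debug.SetGCPercent(50)
        fmt.Println("previous GOGC target:", old)
    }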
Hi Daniel!
Really interesting news, and uniquely dismaying to me as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem.
My question has to do with Ryan's statement:
> We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript
I've experimented deeply in this area (maybe 15k hours invested in BABLR so far) and what I've found is that it's richly rewarding. Javascript is fast enough for what is needed, and its ability to cache on immutable data can make it lightning fast not through doing more work faster, but by making it possible to do less work. In other words, change the complexity class not the constant factor.
Is this a direction you investigated? What made you decide to try to move sideways instead of forwards?
> as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem
Have you considered the man-years and energy you're making everyone waste? Just as an example, I wonder what the carbon footprint of ESLint has been over the years...
Now, it pales in comparison to Python, but still...
I'm no more thrilled than you at the cost of running ESLint, but using a high-level language doesn't need to mean being wasteful of resources.
TS currently wastes tons of resources (most especially peoples' time) by not being able to share its data and infrastructure with other tools and ecosystems, but while there would be much bigger wins from tackling the systemic problem, you wouldn't be able to say something as glib as "TS is 10x faster". Only the work that can be distilled to a metric is done now, because that's how to get a promotion when you work for a company like Microsoft
Go is an extremely strange choice, given the ecosystem you're targeting. I've got quite a bit of experience in it, TS, Rust and C++. I'd pick any of those for productivity and (in the case of C++ and Rust, thread-safety) over Go, simply because Go's type system is so impoverished.
From a performance perspective, I'd expect C++ and Rust to be much easier targets too, since I've seen quite a few industrial Go services be rewritten in C++/Rust after they fail to meet runtime performance / operability targets.
Wasn't there a recent study from Google that came to the same conclusion? (They see improved productivity for Go with junior programmers that don't understand static typing, but then they can never actually stabilize the resulting codebase.)
Fast dev tools are awesome and I am glad the TS team is thinking deeply about dev experience, as always!
One trade off is if the code for TS is no longer written in TS, that means the core team won’t be dogfooding TS day in and day out anymore, which might hurt devx in the long run. This is one of the failure modes that hurt Flow (written in OCaml), IMO. Curious how the team is thinking about this.
Hey bcherny! Yes, dog-fooding (self-hosting) has definitely been a huge part in making TypeScript's development experience as good as it is. The upside is the breadth of tests and infrastructure we've already put together to watch out for regressions. Still, to supplement this I think we will definitely be leaning a lot on developer feedback and will need to write more TypeScript that may not be in a compiler or language service codebase. :D
Interesting! This sounds like a surprisingly hard problem to me, from what I've seen of other infra teams.
Does that mean more "support rotations" for TS compiler engineers on GitHub? Are there full-stack TS apps that the TS team owns that ownership can be spread around more? Will the TS team do more rotations onto other teams at MSFT?
Ultimately the solution has to be breaking the browser monopoly on JS, via performance parity of WASM or some other route, so that developers can dogfood in performant languages instead across all their tooling, front end, and back end.
First, this thread and article have nothing to do with language and/or application execution performance. It is only about the tsc compiler execution time.
Second, JavaScript already executes quickly. Aside from arithmetic operations, it has now reached performance parity with Java, and highly optimized JavaScript (typed arrays and an understanding of data access from arrays and objects in memory) can come within 1.5x the execution speed of C++. At this point all the slowness of JavaScript is related to things other than code execution, such as: garbage collection, unnecessary framework code bloat, and poorly written code.
That being said, it isn't realistic to expect significantly faster execution times by replacing JavaScript with a WASM runtime. This is even more true after considering that many performance problems with JavaScript in the wild are human problems more than technology problems.
Third, WASM has nothing to do with JavaScript, according to its originators and maintainers. WASM was never created to compete with, replace, modify, or influence JavaScript. It was created as a language-agnostic Flash replacement that runs in a sandbox. And because WASM executes in its own agnostic sandbox, the cost of replacing an existing runtime is high: the existing JavaScript runtime is already up and available, while spinning up a WASM runtime is more akin to installing a desktop application for a first-time run.
How do you reconcile this view with the fact that the typescript team rewrote the compiler in Go and it got 10x faster? Do you think that they could have kept in in typescript and achieved similar performance but they didn't for some reason?
This was touched on in the video a little bit—essentially, the TypeScript codebase has a lot of polymorphic function calls, and so is generally hard to JIT optimize. JS to Go therefore yielded a direct ~3.5x improvement.
The rest of the 10x comes from multi-threading, which wasn't possible to do in a simple way in the JS compiler (efficient multithreading while writing idiomatic code is hard in JS).
JavaScript is very fast for single-threaded programs with monomorphic functions, but in the TypeScript compiler's case, the polymorphic functions and opportunity for parallelization mean that Go is substantially faster while keeping the same overall program structure.
I have no idea about the details of their test cases. If they had used an even faster language like Cobol or Fortran maybe they could have gotten it 1,000,000x faster.
What I do know is that some people complain about long compile times in their code that can last up to 10 minutes. I had a personal application that was greater than 60k lines of code and the tsc compiler would compile it in about 13 seconds on my super old computer. SWC would compile it in about 2.5 seconds. This tells me the far greater opportunity for performance improvement is not in modifying the compiler but in modifying the application instance.
Very short, succinct and informative comment. Thank you.
Are you looking for non-browser performance such as 3d? I see no case that another language is going to bring performance to the DOM. You'd have to be rendering straight to canvas/webgl for me to believe any of this.
They should write a typescript-to-go transpiler (in typescript) , so that they can write their compiler in typescript and use typescript to transpile it to go.
The issue with Flow is that it's slow, flaky and has shifted the entire paradigm multiple times making version upgrades nearly impossible without also updating your dependencies, IF your dependencies adopted the new flow version as well. Otherwise you're SOL.
As a result the amount of libraries that ship flow types has absolutely dwindled over the years, and now typescript has completely taken over.
Our experience is the opposite: we have a pretty large Flow-typed code base and can do a full check in <100ms. When we converted to TS (we decided not to merge it), we saw TypeScript was in the multiple-minute range. It's worth checking out LTI and how typing the boundaries enables Flow to parallelize and give very precise error messages compared to TS. Third-party lib support is, however, basically dead, except that the latest versions of Flow are starting to enable ingestion of TS types, so that's interesting.
I notice this time and time again: projects start with a flexible scripting language and a promise that the performance will be sufficient. I mean, JS is pretty performant as scripting languages go and it is hard to think of any language runtimes that get more attention than the browser VMs. And generally, 90% of the things people do will run sufficiently fast in that VM.
Yet projects inevitably get to the stage where a more native representation wins out. I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.
It makes me think I should be starting any project I have in the lowest level representation that allows me some ergonomics. Maybe more reason to lean into Zig? I don't mean for places where something like Rust would be appropriate. I mean for anything I would consider using a "good enough" scripting language.
It honestly has me questioning my default assumption to use JS runtimes on the server (e.g. Node, deno, bun). I mean, the benefit of using the same code on the server/client has rarely if ever been a significant contributor to project maintainability for me. And it isn't that hard these days to spin up a web server with simple routing, database connectivity, etc. in pretty much any language including Zig or Go. And with LLMs and language servers, there is decreasing utility in familiarity with a language to be productive.
It feels like the advantages of scripting languages are being eroded away. If I am planning a career "vibe coding" or prompt engineering my way into the future, I wonder how reasonable it would be to assume I'll be doing it to generate lower level code rather than scripts.
> I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.
Prisma is currently being rewritten from Rust to TypeScript: https://www.prisma.io/blog/rust-to-typescript-update-boostin...
> Yet projects inevitably get to the stage where a more native representation wins out.
I would be careful about extrapolating the performance gains achieved by the Go TypeScript port to non-compiler use cases. A compiler is perhaps the worst use case for a language like JS, because it is both (as Anders Hejlsberg refers to it) an "embarrassingly parallel task" (each source file can be parsed independently) and a task that requires the results of the parsing step to be aggregated and shared across multiple threads (which requires shared-memory multithreading of AST objects). Over half of the performance gains can be attributed to being able to spin up a separate goroutine to parse each source file. Anders explains it perfectly here: https://www.youtube.com/watch?v=ZlGza4oIleY&t=2027s
We might eventually get shared memory multithreading (beyond Array Buffers) in JS via the Structs proposal [1], but that remains to be seen.
[1] https://github.com/tc39/proposal-structs?tab=readme-ov-file
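For anyone who hasn't watched the talk, the fan-out Anders describes maps almost directly onto Go primitives. Here is a minimal sketch of that shape; parseFile and SourceFile are placeholders, not the actual typescript-go API:

    package main

    import (
        "fmt"
        "sync"
    )

    // SourceFile stands in for a parsed AST; the real types are far richer.
    type SourceFile struct {
        Path string
    }

    // parseFile is a placeholder for the per-file parse step.
    func parseFile(path string) *SourceFile {
        return &SourceFile{Path: path}
    }

    // parseAll launches one goroutine per source file and aggregates
    // the results in shared memory for the checker to consume.
    func parseAll(paths []string) []*SourceFile {
        results := make([]*SourceFile, len(paths))
        var wg sync.WaitGroup
        for i, p := range paths {
            wg.Add(1)
            go func(i int, p string) {
                defer wg.Done()
                results[i] = parseFile(p) // each goroutine writes only its own slot
            }(i, p)
        }
        wg.Wait() // all ASTs are now visible to every thread
        return results
    }

    func main() {
        files := parseAll([]string{"a.ts", "b.ts", "c.ts"})
        fmt.Println(len(files), "files parsed")
    }

The checker can then read every AST in results directly, with no serialization step, which is exactly the part that structured-clone-based JS workers can't do today.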
I think the Prisma case is a bit of a red herring. First, they are using WASM, which itself is a low-level representation. Second, the performance gains appear primarily in avoiding the marshalling of data from JavaScript into Rust (and back again, I presume). Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.
As for the "compilers are special" reasoning, I don't subscribe to it. I suppose because it implies the opposite: that something (other than a compiler) is especially suited to run well in a scripting language. But the former doesn't imply the latter in reality, so the case should be made independently. The Prisma case is one: you are already dealing with JavaScript objects, so it is wise to stay in JavaScript. The old reasons I would choose a scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
> First, they are using WASM which itself is a a low-level representation.
WASM is used to generate the query plan, but query execution now happens entirely within TypeScript, whereas under the previous architecture both steps were handled by Rust. So in a very literal sense some of the Rust code is being rewritten in TypeScript.
> Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.
My point was simply to refute the assertion that once software is written in a low level language, it will never be converted to a higher level language, as if low level languages are necessarily the terminal state for all software, which is what your original comment seemed to be suggesting. This feels like a bit of a "No true Scotsman" argument: https://en.wikipedia.org/wiki/No_true_Scotsman
> As for the "compilers are special" reasoning, I don't subscribe to it.
Compilers (and more specifically lexers and parsers) are special in the sense that they're incredibly well suited for languages with shared memory multithreading. Not every workload fits that profile.
> The old reasons I would choose a scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits still apply. In fact, one of the stated reasons for the Prisma rewrite was "skillset barriers". "Contributing to the query engine requires a combination of Rust and TypeScript proficiency, reducing the opportunity for community involvement." [1]
[1] https://www.prisma.io/blog/from-rust-to-typescript-a-new-cha...
I'm not denying the facts of the matter, I am denying the conclusion. The circumstances of the situation are relevant. Marshalling costs across IPC boundaries come into play in every single possible situation regardless of language. It is why shared-memory architectures exist. It doesn't matter what language is on the other side of the IPC: if the performance gained by using a separate process is not greater than the cost of the communication, then you should avoid the IPC. One way to avoid that cost is to share the memory. For code already running in a JavaScript VM, a very easy way to share the memory is to do the processing in JavaScript.
That is why I am saying your evidence is a red herring. It is a case where a reasonable decision was made to rewrite in JavaScript/TypeScript but it has nothing to do with the merits of the language and everything to do with the environment that the entire system is running in. They even state the Rust code is fast (and undoubtedly faster than the JS version), just not fast enough to justify the IPC cost.
And it in no way applies to the point I am making, where I explicitly question starting a new project, for example questioning "my default assumption to use JS runtimes on the server". It's closer to a "Well, actually ..." than an attempt to clarify or provide a reasoned response.
The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions. And in the case of "reads data from an OS socket/file-descriptor and writes data to an OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
The fact that many software products are moving to lower-level languages is not a general point in favour of lower-level languages being somehow better—rather, it simply aligns with general directions of software evolution.
1. As products mature, they may find useful scenarios involving runtime environments that don’t necessarily match the ones that were in mind back when the foundation was laid. If relevant parts are rewritten in a lower-level language like C or Rust, it becomes possible to reuse them across environments (in embedded land, in Web via WASM, etc.) without duplicate implementations while mostly preserving or even improving performance and unlocking new use cases and interesting integrations.
2. As products mature, they may find use cases that have drastically different performance requirements. TypeScript was not used for truly massive codebases, until it was, and then performance became a big issue.
Starting a product trying to get all of the above from the get go is rarely a good idea: a product that rots and has little adoption due to feature creep and lack of focus (with resulting bugs and/or slow progress) doesn’t stand a chance against a product that runs slower and in fewer environments but, crucially, 1) is released, 2) makes sound design decisions, and 3) functions sufficiently well for the purposes of its audience. Whether LLMs are involved or not makes no meaningful difference: no matter how good your autocomplete is, the second instance still wins over the first—it still takes less time to reach the usefulness threshold and start gaining adoption.
(And if you are making a religious argument about omniscient entities for which there is no meaningful difference between those two cases, which can instantly develop a bug-free product with infinite flexibility and perfect performance at whatever level of abstraction is required, coming any year now, then you should double-check whether, if they do arrive, anyone would still be using them for this purpose. In a world where I, a hypothetical end user, can get X instantly conjured for me out of thin air by a genie, you, a hypothetical software developer, had better have that genie conjure you some money lest your family go hungry.)
> The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions.
Making technical decisions based on hypothetical technologies that may solve your problems in "a year or so" is a gamble.
> And in the case of "reads data from an OS socket/file-descriptor and writes data to an OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
Arguably Go is a scripting language designed for exactly that purpose.
I wouldn't think choosing a native language over a scripting language is a "gamble" but I suppose that all depends on ability and risk tolerance. I think it would be relatively easy to develop using Rust, Go, Zig, etc.
I would not call Go a scripting language. Go programs are statically linked single binaries, not a textual representation that is loaded into an interpreter or VM. It has more in common with C than with Bash. But to make sure we are clear (in case you want to dig in on calling Go a scripting language), I am talking about dynamic programming languages like Python, Ruby, JavaScript, PHP, Perl, etc., which generally do not compile to static binaries and instead load text files into an interpreter/VM. These dynamic scripting languages tend to perform below static binaries (like Go, Rust, C/C++) and usually below bytecode-interpreted languages (like C# and Java).
Rather than fixating on this single Prisma example, I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages.
First of all, I would argue that software rewrites are a bad proxy metric for language quality in general. Language rewrites don't measure languages purely on a qualitative scale, but rather on a scale of how likely they are to be misused in the wrong problem domain.
Low level languages tend to have a higher barrier to entry, which as a result means they're less likely to be chosen on a whim during the first iteration of a project. This phenomenon is exhibited not just at the macroscopic level of language choice, but oftentimes when determining which data structures and techniques to use within a specific language. I've very seldom found myself accidentally reaching for a Uint8Array or a WeakRef in JS when a normal array or reference would suffice, and then having to rewrite my code; not because those solutions are superior, but because they're so much less ergonomic that I'm only likely to use them when I'm relatively certain they're required.
This results in obvious selection bias. If you were to survey JS developers and ask how often they've rewritten a normal reference in favor of a WeakRef vs the opposite migration, the results would be skewed because the cost of dereferencing WeakRefs is high enough that you're unlikely to use them hastily. The same is true to a certain extent in regards to language choice. Developers are less likely to spend time appeasing Rust's borrow checker when PHP/Ruby/JS would suffice, so if a scripting language is the best choice for the problem at hand, they're less likely to get it wrong during the first iteration and have to suffer through a massive rewrite (and then post about it on HN). I've seen plenty of examples of competent software developers saying they'd choose a scripting language in lieu of Go/Rust/Zig. Here's the founder of Hashicorp (who built his company on Go, and who's currently building a terminal in Zig), saying he'd choose PHP or Rails for a web server in 2025: https://www.youtube.com/watch?v=YQnz7L6x068&t=1821s
> your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages
That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements. When I say "it makes me think I should ..." I am not saying: "Everyone everywhere should always under any circumstances ...". It is a call to question the assumption, not to make emphatic universal decisions on any possible project that could ever be conceived. That would be a bad faith interpretation of my post. If that is what you are arguing against, consider if you really believe that is what I meant.
So my point stands: I am going to consider this more deeply rather than default assuming that an interpreted scripting language is suitable.
> Low level languages tend to have a higher barrier to entry,
I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
> That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements.
The first comment I wrote in this thread was a response to the following quote: "Yet projects inevitably get to the stage where a more native representation wins out." Inevitable means impossible to evade. That's about as close to a black and white statement as possible. You're also completely ignoring the substance of my argument and focusing on the wording. My point is that language rewrites (like the TS rewrite that sparked this discussion) are a faulty indicator of scripting language quality.
> I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
And I've already said that I disagree with this assertion. I'll just quote myself in case you haven't read through all my comments: "I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits [of scripting languages] still apply." I was under the impression that I didn't have to keep restating my position.
I don't believe that AI has eroded the barriers of entry to the point where the average Ruby or PHP developer will enjoy passing around memory allocators in Zig while writing API endpoints. Neither of us can be 100% certain about what the future holds for AI, but as someone else pointed out, making technical decisions in the present based on AI speculation is a gamble.
Ah, now we're at the dictionary definition level. So let's check Google:
Which interpretation of the word is "good faith" considering the rest of my post? If I said "if you drink and drive you will inevitably get into an accident", would you argue against that statement? Would you argue with Google and say "I have sat down before and the phone didn't ring"? It is Hacker News policy, and just good internet etiquette, to argue with good faith in mind. I find it hard to believe you could have read my entire post and come away with the belief of absolutism.
edit: Just to add to this, your interpretation assumes I think Django (the Python web application framework) will unavoidably be rewritten in a lower level language. And Ruby on Rails will unavoidably be rewritten. Do you believe that is what I was saying? Do you believe that I actually believe that?
I wrote 362 words on why language rewrites are a faulty indicator of language quality with multiple examples and anecdotes, and you hyper-fixated on the very first sentence of my comment, instead of addressing the substance of my argument. In what alternate universe is that a good faith argument? If you were truly arguing in good faith you'd restate your position in whichever way you'd like your argument represented, and then proceed to respond to something besides the first sentence.
> If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement?
If we were having a discussion about automobile safety and you wrote several hundred words about why a specific type of accident isn't indicative of a larger trend, I wouldn't respond by cherry picking the first sentence of your comment, and quoting Google definitions about a phone ringing.
I don't think this speaks to the general reasons someone would rewrite a mid- or low-level project in a high-level language, so much as to the special treatment JS/TS get. Yes, your data model being the default supported one, and everything else in the world having to serialize/deserialize to accommodate that, slows performance. In other words, this is just a reason to use the natively supported JS/TS, still very much the favorite children of browser engines, over the still sort of hacked-in Rust.
I think it's smart to start with a high level language which should reduce development time, prove the worth of the application, then switch to a lower level language later.
What was that saying again? Premature optimisation is the root of all evil
https://news.ycombinator.com/item?id=29228427
A thread going into what Knuth meant by that quote that is usually shortened to "premature optimization is the root of all evil". Or, to rephrase it: don't tire yourself out climbing for the high fruit, but do not ignore the low-hanging fruit. But really I don't even see why "scripting languages" are the particular "high level" languages of choice. Compilers nowadays are good. No one is asking you to drop down to C or C++.
> I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.
Software never gets rewritten in a higher level language, but software is constantly replaced by alternatives. First example that comes to mind is Discord, an Electron app that immediately and permanently killed every other voice client on the market when it launched.
Yes, scripting replacements often usurp existing ossified alternatives. And there is some truth that a higher level language gave some leverage to the developers. That is why I mentioned the advent of LLM based coding assistants and how this may level the playing field.
If we assume that coding assistants continue to improve as they have been and we also assume that they are able to generate lower level code on par with higher level code, then it seems the leverage shifts away from "easy to implement features" languages to "fast in most contexts" languages.
Only time will tell, of course. But I wonder if we will see a new wave of replacements from Electron based apps to LLM assisted native apps.
It’s a little more nuanced though — I doubt the audio processing in Discord is written in JavaScript. (But I haven’t looked!)
Isn't most of Discord backend Rust and Go?
> Discord ... immediately and permanently killed every other voice client on the market
Do you mean voice clients like FaceTime, Zoom, Teams, and Slack?
Well, voice clients on PC for casual gamer use :p
They're talking about TeamSpeak, Vent, Mumble, and Skype.
Sure but that comment was mostly about backend. If Discord used js/ts for backend they wouldn't replace anyone.
I don't think the success of Discord is due to it being written in Electron. Or is it?
I game very little these days, but have run Mumble, Ventrilo and TeamSpeak in the past, and the problem was always the friction in onboarding people onto them: you'd have to exchange host, port and password at best, or worse, explain how to download, install and use the thing.
Discord can run from a browser, making onboarding super easy. The installable app being in Electron makes for minimal (if any) difference between it and the website.
In summary, running in the web browser helps a lot, and Electron makes it very easy for them to keep the browser version first class.
As an added bonus, they can support Linux, Windows and macOS equally well.
I would say it helps: without Electron, serving all of the above with equal feature parity would have been too expensive or slow, and perhaps it just wouldn't have been as frictionless for all types of new users as it is.
Inevitably? Well, the promise of using something less efficient in terms of performance is that it will be more efficient in terms of development. Many projects fail because they optimize too early and never build the features they needed, or can't iterate fast enough to prove value, and die. So if the native version is better but failed, it's not so inevitable that it will get to that stage.
Right, which is my point about LLM code assistants. You did have two cases in the past: native but slow to add features, so the project eventually dies; vs. scripted but with performance bad enough that it eventually needs to be rewritten. (Of course, this is a false dichotomy, but I'm playing into your scenario.)
Now we may have a new case: native but fast to add features using a code assist LLM.
If that new case is a true reflection of the near future (only time will tell) then it makes the case against the scripted solution. If (and only if) you could use a code assist LLM to match the feature efficiency of a scripting language while using a native language, it would seem reasonable to choose that as the starting point.
That’s an interesting idea. It’s amazing how far we’ve come without essentially any objective data on how much these various methodologies (e.g. using a scripting language) improve or worsen development time.
The adoption of AI Code Assistance I am sure will be driven similarly anecdotally, because who has the time or money to actually measure productivity techniques when you can just build a personal set of superstitions that work for you (personally) and sell it? Or put another way, what manager actually would spend money on basic science?
> the lowest level representation that allows me some ergonomics
The ergonomics of compiling your code for every combination of architecture and platform you plan to deploy to? It's not fun. I promise.
> my default assumption to use JS runtimes on the server
AWS Lambda has a minimum billing interval of 1ms. To do anything interesting you have to call other APIs which usually have a minimum latency of 5 to 30ms. You aren't buying much of anything in any scalable environment.
> there is decreasing utility in familiarity with a language to be productive.
I hope you aren't planning on making money from this code. Either way, have fun debugging that!
> the advantages of scripting languages are being eroded away.
As long as scripting languages have interfaces which let them access C libraries, either directly or through compiled modules, they will have strong advantages. Just having a CLI where you can test out ideas and check performance is massively powerful, and I hate not having it in any compiled project. Go has particularly bad ergonomics here, as writing test cases is easy but exploring ideas is not, due to its strictness down to even the code-styling level.
> It honestly has me questioning my default assumption to use JS runtimes on the server (e.g. Node, deno, bun).
The JS runtimes are fine for the majority of use cases but the ecosystem is really the issue IMO.
> the benefit of using the same code on the server/client has rarely if ever been a significant contributor to project maintainability for me
I agree and now with OpenAPI this is even less of an argument.
"A sufficient smart compiler..."
The JS `tsc` type checks the entire 1.5 million line VS Code source in 77s (non-incremental). 7s is a lot better and will certainly improve DX - which is their goal - but I don't see how that's "insufficient".
The trade-off is that the team will have to start dealing with a lot of separate issues... How do tools like ESLint TS talk to TSC now? How to run this in playground? How to distribute the binaries? And they also lose out on the TS type system, which makes their Go version rely a little more on developer prowess.
This is an easy choice for one of the most fundamental tools underlying a whole ecosystem, maintained full-time by Microsoft and one of the developers of C# itself.
Other businesses probably want to focus on actually making money by leading their domain and easing long-term maintenance.
For previous attempts at a faster tsc, but in rust, see:
1. https://github.com/dudykr/stc - Abandoned (https://github.com/swc-project/swc/issues/571#issuecomment-1...)
2. https://github.com/kaleidawave/ezno - In active development. Does not have the goal of 1:1 parity to tsc.
I think Deno and Bun are the two successful attempts at a faster tsc :)
Both Deno and Bun still use current tsc for type checking
They just strip types and don’t do any type checking
The news for me is Microsoft teams relying on Go.
Strange choice to use Go for the compiler instead of C# or F#.
Now if they have problems, they will depend on the Go team at Google to fix them.
Even though I have my considerations regarding Go, I love that they picked Go instead of the fashion to go Rust that seems to be the norm now.
A compiled managed language is a much better approach for userspace applications.
Pity that they didn't go with AOT compiled .NET, though.
> Pity that they didn't go with AOT compiled .NET, though.
Yeah. It seems to be somewhat unfashionable even within Microsoft.
(edit: it seems to be you and me and barely anyone else on HN advocating for C#)
Also, this is surprising because this was presented and led by Anders Hejlsberg, who is the creator of both C# and Typescript.
If anyone should have picked C# it would be him.
Hejlsberg seemed quite negative when it came to cross platform AOT compiled C# in several comments he's made, hinting at problems with both performance and maturity on certain platforms.
Projects like this are needed to improve C#'s cross platform AOT. Missed opportunity IMO.
Absolutely. Go is where it is because of the parent org's commitment; strange that Microsoft is wasting this opportunity.
This was also surprising to me – C# is a really awesome and modern language.
I happened to be doing a lot of C# and .NET dev when all this transition was happening, and it was very cool to be able to run .NET in Linux. C# is a powerful language with great and constantly evolving ideas in it.
But then all the stuff between the runtimes, API surfaces, Core vs Framework, etc all got extremely confusing and off-putting. It was necessary to bring all these ecosystems together, but I wonder if that kept people away for a bit? Not sure.
I think the main thing is that they are porting, not rewriting. The current tsc is functional by nature, and that makes Go a better fit.
I think the much larger ask that C# couldn't answer is "expressive" access to low-level struct layout [1]
[1] https://www.youtube.com/watch?v=10qowKUW82U&t=769s
All Azure contributions to CNCF are using a mix of Go and Rust, mostly.
This is kind of weird, given the team.
If I recall from an article a while back, the idea was originally Rust, but the current compiler design has lots of shared references that would make a port to Rust a lot of work.
Personally, I think Rust only makes sense in scenarios where automatic memory management of any kind is either unwanted, or where it is a quixotic battle to make the target group think otherwise.
OS kernels, firmware, GPGPU,....
If it is the ML-inspired type system you're after, there are plenty of options among compiled managed languages; true, Go isn't really in that camp, but whatever.
I'd love a language that is GC'd like Go, but with the ML-inspired type system, and still an imperative language. OCaml seems to be the closest thing to Rust in that regard, but it's not imperative.
OCaml has if/else, for loops, whiles, mutations, what are you missing?
There are also Swift, F# (Native AOT), Scala (Native, GraalVM, OpenJ9).
That language is Swift.
Nim is pretty close to that for me. It's more Pascal-ish in its heritage, but it has a sophisticated type system, including case types similar to ML sum types, and compile-time evaluation.
Rust memory management is automatic: object destructors run when the object exits scope, without needing explicit management by the programmer.
More like compiler assisted management, with compiler errors when the developer doesn't follow the teacher.
Or possibly you want to use a language you're familiar with in adjacent spaces (e.g. tools), or you want to tackle concurrency bugs more directly. There is more to Rust than its…
Dealing with references you typically find in a compiler is not a problem for Rust. Arena allocation and indices are your friend.
Flattened ASTs are also faster than tree/pointer ASTs.
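For anyone unfamiliar with the technique, here's a minimal sketch of a flattened AST in Go, with made-up node kinds: all nodes live in one contiguous slice and refer to each other by index instead of by pointer, which improves cache locality and shrinks the nodes.

    package main

    import "fmt"

    type NodeKind uint8

    const (
        KindNumber NodeKind = iota
        KindAdd
    )

    type Node struct {
        Kind        NodeKind
        Value       float64 // used when Kind == KindNumber
        Left, Right int32   // indices into the nodes slice, -1 if unused
    }

    type AST struct {
        Nodes []Node // one allocation; no pointer chasing between nodes
    }

    func (a *AST) add(n Node) int32 {
        a.Nodes = append(a.Nodes, n)
        return int32(len(a.Nodes) - 1)
    }

    func (a *AST) eval(i int32) float64 {
        n := a.Nodes[i]
        switch n.Kind {
        case KindNumber:
            return n.Value
        case KindAdd:
            return a.eval(n.Left) + a.eval(n.Right)
        }
        panic("unknown node kind")
    }

    func main() {
        var ast AST
        // builds (1 + 2) as a flat slice of three nodes
        l := ast.add(Node{Kind: KindNumber, Value: 1, Left: -1, Right: -1})
        r := ast.add(Node{Kind: KindNumber, Value: 2, Left: -1, Right: -1})
        root := ast.add(Node{Kind: KindAdd, Left: l, Right: r})
        fmt.Println(ast.eval(root)) // 3
    }

This is also the same indices-instead-of-references trick mentioned above for Rust arenas; it works in any language.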
I wonder if this project can easily be integrated into Deno (built mainly in Rust)?
He also mentioned doing a line-for-line port. Assuming you could somehow manage that, you'd probably end up with something slower than JS (not entirely a joke). I'm a Rust fanboy, but have to concede that Go was the best choice here.
If it was a fresh compiler then the choice would be more difficult.
Hejlsberg discusses the decision not to use C# here: https://www.youtube.com/watch?v=10qowKUW82U&t=1154s
>Pity that they didn't go with AOT compiled .NET, though.
I was trying to push .NET as our possible language for reasonably high-performance executables. Seeing this means I'll stop trying to advocate for it. If even this team doesn't believe in it.
That makes sense if your project has similar constraints and requirements.
I like when Microsoft doesn't pretend that their technologies are the right answer for every problem.
One unrelated team at Microsoft not 'believing' in .NET is enough to make you change direction?
More specifically, the guy who created C# doesn't believe in it (for this particular project).
But, of course, that is not unusual. There is no language in existence that is best suited to every project out there.
They cited code style and ease of porting as reasons to use Go over C#, not performance.
I didn't say it was very performance-critical; Go and C# are both good enough for us in this regard. The problem is that, when evaluating the whole thing, they decided against C#; that is what's problematic here.
But they did not state it was <because> of C#'s performance, so I don't think this is THAT problematic. But I agree that it would be nice to see them dogfooding their own language for such a massive project, and a project that is even related to TypeScript (C# inspired some of its features); it is a shame they don't do it, but that is also the case for many of their projects (they are even pushing React Native for apps nowadays), so I think at some level it's really fine.
> But they did not state it was <because> of C#'s performance
But I just said my point is not about performance at all! It is about the whole package. The performance of C# and Go is enough for my use case, same for Java and C obviously. They just told us that they don't think the whole package makes sense, and disowned AOT compilation.
But you said: > I was trying to push .NET as our possible language for reasonably high-performance executables. Seeing this means I'll stop trying to advocate for it. If even this team doesn't believe in it.
Which made me naturally think your point was, indeed, about performance. Although, as it appears, I'm wrong, so fair enough.
Also cross platform support
There are some external projects that have tried to port tsc to native. stc [0], for instance, was one. IIRC it started out in Go, since Go had a more comparable type system (both are structurally typed), making it easier to do one-to-one conversions of code from one language to the other. I'm not totally sure why it ended up pivoting to Rust.
[0]: https://github.com/dudykr/stc
> I love that they picked Go instead of the fashion to go Rust
This seems super petty to me. Like, if at the end of the day you get a binary that works on your OS and doesn’t require a runtime, why should you “love” that they picked one language over another? It’s exactly the same outcome for you as a user.
I mean, if you wanted to contribute to the project and you knew go better than rust, that would make sense. But sounds like you just don’t like rust because of… reasons, and you’re just glad to see rust “fail” for their use case.
> that seems to be the norm now.
According to whom?
I was asking myself the same questions.
ah, answers below: https://news.ycombinator.com/item?id=43333296
It's not just a pity, it's very surprising. In my eyes Go is a direct competitor of C#: whenever you pick Go for a project, C# should have been a serious consideration. Hejlsberg designed C#, and that a team in which he's an authority figure would opt to use Go, a language which frankly I would not consider for building a compiler, is astounding.
Not saying that in a judgemental way, I'm just genuinely surprised. What does this say about what Hejlsberg thinks of C# at the moment? I would assume one reason they don't pick C# is because it's deeply unpopular in the open source world. If Microsoft was so successful in making Typescript popular for open source work, why can't they do it for C#?
I have not opted to use C# for anything significant in the past decade or so. I am not 100% sure why, but there's always been something I'd rather use. Whether that's Go, Rust, Ruby or Haskell. I always enjoyed working in C#, I think it's a well designed and powerful language even if it never made the top of my list recently. I never considered that there might be something so fundamentally wrong with it that not even Hejlsberg himself would use it to build a Typescript compiler.
What's wrong with C#?
Anders Hejlsberg explains here: https://youtu.be/10qowKUW82U?t=1154. TL;DW:
- C# is bytecode-first, Go targets native code. While C# does have AOT capabilities nowadays this is not as mature as Go's and not all platforms support it. Go also has somewhat better control over data layout. They wanted to get as low-level as possible while still having garbage collection.
- This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style. This suits Go well while a C# port would have required more restructuring.
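On the data-layout point: Go embeds struct fields by value, so hot token and node arrays can be laid out contiguously with no per-element heap object. A tiny illustration (the types are made up, not from typescript-go):

    package main

    import (
        "fmt"
        "unsafe"
    )

    type Span struct {
        Start, Length int32
    }

    type Token struct {
        Kind int16
        Span Span // stored inline, not behind a reference
    }

    func main() {
        // a single contiguous allocation for all 1024 tokens;
        // no pointer hop per element, predictable element size
        tokens := make([]Token, 1024)
        fmt.Println(unsafe.Sizeof(tokens[0])) // fixed size per element (12 bytes here)
    }

In JS, by contrast, each of those tokens would typically be a separate heap object, which is part of the "low-level with garbage collection" argument in the video.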
This is a shockingly out-of-date statement by Anders.
I'm not sure what's going on; I guess he's just not involved with the runtime side of .NET at all, so he doesn't actually know where its capabilities sit circa 2024/2025. But really, it's a terrible situation to be in. Especially given how much worse the langdev UX in Go is compared to C#, F# or Rust. No one would've batted an eye if any of those was used.
> Especially given how much worse the langdev UX in Go is compared to C#, F# or Rust.
Can you explain why the DX in Go is "worse"? I've seen the exact opposite during my professional work.
The typing situation in Go is a mess. GADTs are generally a joy to work with; nullability is not.
Lack of optionals/enums/sum types is a huge regression from TypeScript to Go, IMHO.
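For readers who haven't hit this: the closest Go gets to a TS-style discriminated union is the sealed-interface idiom, where exhaustiveness is a runtime check rather than a compile-time one. A rough sketch:

    package main

    import "fmt"

    // Go has no sum types, so the common workaround is a "sealed"
    // interface: an unexported marker method restricts implementers
    // to this package, and a type switch plays the role of the match.
    type Shape interface{ isShape() }

    type Circle struct{ Radius float64 }
    type Square struct{ Side float64 }

    func (Circle) isShape() {}
    func (Square) isShape() {}

    func area(s Shape) float64 {
        switch s := s.(type) {
        case Circle:
            return 3.14159 * s.Radius * s.Radius
        case Square:
            return s.Side * s.Side
        default:
            panic("unhandled shape") // runtime check; the compiler won't catch a missing case
        }
    }

    func main() {
        fmt.Println(area(Circle{Radius: 1}))
        fmt.Println(area(Square{Side: 2}))
    }

In TS, `type Shape = Circle | Square` plus a discriminant field gets you an exhaustiveness check at compile time; in Go you find out when the panic fires.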
Honest q, which part is out of date and why? Thanks
Pretty much everything:
> While C# does have AOT capabilities nowadays this is not as mature as Go's and not all platforms support it
https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...
Only Android is missing from that list (marked as "Experimental"). We could argue about maturity but this is a bit subjective.
> Go also has somewhat better control over data layout
How? C# supports structs, ref structs (stack allocated only structures), explicit stack allocation (`stackalloc`), explicit struct field layouts through annotations, control over method local variable initialization, control over inlining, etc. Hell, C# even supports a somewhat limited version of borrow checking through the `scoped` keyword.
> This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style.
C# has been consistently moving into that direction by taking more and more inspiration from F#.
The only plausible reason would be the extensive use of structural typing, which is present in TS and Go but not in C#.
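To illustrate what structural typing looks like on the Go side (with the caveat that Go applies it to interface method sets, while TS applies it to object shapes generally):

    package main

    import "fmt"

    // Stringer is satisfied implicitly: any type with a matching
    // String() method conforms, with no "implements" declaration.
    // In C#, Token would have to explicitly declare an interface.
    type Stringer interface {
        String() string
    }

    type Token struct{ Text string }

    func (t Token) String() string { return t.Text }

    func describe(s Stringer) { fmt.Println(s.String()) }

    func main() {
        describe(Token{Text: "ident"}) // Token never names Stringer anywhere
    }
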
Chances are it was just the personal preference of the team, and decades of arguing about language design have worn out Anders Hejlsberg. I don't think structural typing alone is enough of an argument to justify the choice over Rust. Maybe the TS team thought choosing Go would have better optics. Well, they won't have it both ways, because this decision is in my opinion clearly short-sighted, and as someone aptly pointed out on Twitter, they will now be beholden to Google's control over Go should they ever need the compiler to support a new platform or evolve in a particular way. Something they would've gotten easily with .NET.
On the topic of preference, this thread has really shown me that there is a HUGE preference for a native-aot gc language that is _not_ Go. People want AOT because of the startup and memory characteristics, but do not want to sacrifice language ergonomics. C# could fill that gap if Microsoft would push it there.
Just use a fast GC library in C++.
I don't think C++ has good language ergonomics.
I don't think there is anything faster.
I highly doubt that bolting a GC on to C++ is going to be any faster than the equivalent C# or Java code.
Doubt is human, but it isn't always warranted. In C++ you can use a concurrent, completely pause-free garbage collector, where the programmer decides which data is managed by the GC. This enables code optimizations in ways that aren't possible in C# and Java.
You realize that is literally not the same thing? I said equivalent code. The whole reason for using a managed language with GC is to not think about those things, because they eat up thought and development time. Of course the language that lets you hand-optimize every little line will eventually be more performant. I really think you're discounting both C#'s ability to do those things and just how good Java's GCs are. Anyway, that's not the point.
The point is C++ sucks dude. There is no way that you can reasonably think that bolting a GC on to C++ is going to be a pleasurable experience. This whole conversation started with _language ergonomics_. I don’t care that it’ll save 0.5 milliseconds. I’d rather dig holes than write C++.
To correct myself, someone pointed out a commit graph which indicates Anders Hejlsberg's heavy involvement with the ongoing port efforts: https://github.com/microsoft/typescript-go/graphs/contributo...
Isn't the AOT story for F# pretty meh? AOT + System.Text.Json requires source generation as best I can tell, which F# doesn't support yet (to my knowledge).
In complex projects like this, Go requires manual scripting and build-time code generation. Arguably, writing a small shim project in C# is much easier. You don't exactly do a lot of JSON serialization in a compiler either way. Other than that - F# "just works" and does not require anything extra. It is just IL after all.
NativeAOT story itself is also interesting - I noted it in a sibling comment but .NET has much better base binary size and binary size scalability through stronger reachability analysis, metadata compression and pointer-rich binary sections dehydration at a small startup cost (it's still in the same ballpark). The compiler output is also better and so is whole program view driven devirtualization, something Go does not have. In the last 4 years, .NET's performance has improved more than Go's in the last 8. It is really good at text processing at both low and high level (only losing to Rust).
The most important part here is that TypeScript at Microsoft is a "first-party" customer. This means if they need additional compiler accommodations to improve their project experience from .NET, they could just raise it and they will be treated with priority.
This decision is technically and politically unsound at multiple levels at once. For example, they will need good WASM support. .NET's existing WASM support is considered "decent" and even that one is far from stellar, yet considered ahead of the Go one. All they needed was to allocate additional funding for the ongoing already working NativeAOT-LLVM-WASM prototype to very quickly get the full support of the target they needed. But alas.
I already hinted on BlueSky that they shouldn't wonder why .NET has adoption problems outside the traditional Windows ecosystem, when decisions like these are taken.
The nightmare of Midori never ends. And especially right as the platform, from the technical standpoint, is getting really good(tm).
C# has become a poor jack of all trades, trying to be Java, Go and F# at the same time and actually being a shitty version of all of them. On top of that, .NET has become very enterprisey bloatware. In all honesty, I'm not surprised that they went with Go, as it has a clear identity and a clear use case which it caters to extremely well, and it doesn't lose focus by trying to be too many other unrelated things at the same time.
Maybe it's time to stop swallowing everything that Microsoft sales folks/evangelists spoon-feed you and wake up to the fact that people who are paid by Microsoft to bang the drum about Microsoft products, telling you that .NET and C# are oh so good and the best at everything, are maybe not actually that credible.
Look at the hard facts. Every single product which Microsoft has built that actually matters (e.g. all their Azure CNCF stuff, Dapr, now this) is using non Microsoft languages and technologies.
You won't see Blazor being used by Microsoft or the 73rd reinvention of ASP.NET Core MVC Minimal APIs Razor Pages Hocus Pocus WCF XAML Enterprise (TM) for anything mission critical.
If not for Microsoft's backing, C# would have died a long time ago. It's just another D, but with a lot more money behind it. It had its chance/momentum, but it failed, and its time has passed. Resurrecting the language now would be very difficult.
It seems it's because AOT plays a bit of second fiddle in the .NET ecosystem, and native is a top priority for their case. After hearing the reasoning ( https://youtu.be/ZlGza4oIleY?si=1GKSX61AF20VQr-G&t=1000 ) I don't blame them for choosing Go.
C# normally needs a runtime (the .NET CLR) while Go compiles down to a self-contained binary. And the Go toolchain allows you to cross-compile for other architectures fairly easily.
So that could be a fundamental reason why.
The grandparent was talking about AOT.
.NET has AOT compilation now. There really is no excuse, especially when you consider that C# has a pretty decent type system and Go has an ad-hoc, informally specified, bug-ridden, slow implementation of half of a decent type system.
> Go has an ad-hoc, informally specified, bug-ridden, slow implementation of half of a decent type system.
It's not lost on me that this is a widely used aphorism. The problem is that it's not true in any way shape or form.
It absolutely is... Go's type system is an abomination.
Let's not assume C#'s type system is THAT much better; it is also a mess in dozens of cases and is hardly pleasant from a DX standpoint.
People using pointers when they want to hack in null values points towards a problem in Go's type system.
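Concretely, the pattern being criticized looks something like this (an illustrative sketch, not code from any real project):

    package main

    import "fmt"

    // With no Option type, Go code often uses a pointer purely to
    // get a "missing" state: nil means absent. This conflates
    // optionality with indirection and invites nil dereferences.
    type Config struct {
        Timeout *int // nil = not set, otherwise points at the value
    }

    func timeoutOrDefault(c Config) int {
        if c.Timeout == nil {
            return 30 // fallback when the field was never set
        }
        return *c.Timeout
    }

    func main() {
        t := 10
        fmt.Println(timeoutOrDefault(Config{Timeout: &t})) // 10
        fmt.Println(timeoutOrDefault(Config{}))            // 30
    }
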
Just tried it on our codebase. Getting over a thousand errors, a good portion of which seem to be:
Probably an easy fix. Running it on another portion of the codebase results in a SIGSEGV with a bad/nil pointer dereference, which puts me in the camp of people questioning the choice of Go.
If you are wondering why not Rust instead of Go, they outline why Rust was not chosen. This is a port, not a reimplementation. Many of the data structures cannot easily be ported to Rust, such as Nodes with cyclic dependencies. Check the longer interview here: https://www.youtube.com/watch?v=10qowKUW82U&ab_channel=Michi... Also, I think the discussion on esbuild's choice of language applies here as well, as it is a very similar situation. You can find it here on HN.
> By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
--https://github.com/microsoft/typescript-go/discussions/411
I haven't looked at the tsc codebase. I do currently use Golang at my job and have used TypeScript at a previous job several years ago.
I'm surprised to hear that idiomatic Golang resembles the existing coding patterns of the tsc codebase. I've never felt that idiomatic code in Golang resembled idiomatic code in TypeScript. Notably, sum types are commonly called out as something especially useful in writing compilers, and when I've wanted them in Golang I've struggled to replace them.
Is there something special about the existing tsc codebase, or does the statement about idiomatic Golang resembling the existing codebase something you could say about most TypeScript codebases?
> I'm surprised to hear that idiomatic Golang resembles the existing coding patterns of the tsc codebase. I've never felt that idiomatic code in Golang resembled idiomatic code in TypeScript.
To be fair, they didn't actually say that. What they said was that idiomatic Go resembles their existing patterns. I'd imagine what they mean by that is that a port from their existing patterns to Go is much closer to a mechanical 1:1 process than a port to Rust or C#. Rust is the obvious choice for a fully greenfield implementation, but reorganizing around idiomatic Rust patterns would be much harder for most programs that are not already written in a compatible style. e.g. For Rust programs, the precise ownership and transfer of memory needs to be modelled, whereas Go and JS are both GC'd and don't require this.
For a codebase that relies heavily on exception handling, I can imagine a 1:1 port would require more thought, but compilers generally need to have pretty good error recovery so I wouldn't be surprised if tsc has bespoke error handling patterns that defers error handling and passes around errors as values a lot; that would map pretty well to Go.
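For what it's worth, here's a sketch of what that diagnostics-as-values style looks like in Go; the Checker and Diagnostic types here are hypothetical, not tsc's actual design:

    package main

    import "fmt"

    // Diagnostic is an error-as-value record: instead of throwing,
    // the checker appends diagnostics and keeps going, which maps
    // directly onto Go's explicit error values.
    type Diagnostic struct {
        Pos     int
        Message string
    }

    type Checker struct {
        diags []Diagnostic
    }

    func (c *Checker) errorf(pos int, format string, args ...any) {
        c.diags = append(c.diags, Diagnostic{Pos: pos, Message: fmt.Sprintf(format, args...)})
    }

    func (c *Checker) checkAssignment(pos int, target, source string) {
        if target != source {
            // record the problem and recover, rather than aborting the whole check
            c.errorf(pos, "type %q is not assignable to type %q", source, target)
        }
    }

    func main() {
        var c Checker
        c.checkAssignment(42, "string", "number")
        for _, d := range c.diags {
            fmt.Printf("pos %d: %s\n", d.Pos, d.Message)
        }
    }
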
Most TypeScript projects are very far away from compiler code, so that this wouldn't resemble typical TypeScript isn't too surprising. Compilers written in Go also don't tend to resemble typical Go either, in fairness.
I'm not involved in this rewrite, but I made some minor contributions a few years ago.
TSC doesn't use many union types; it's mostly OOP-ish down-casting or chains of if-statements.
One reason for this is, I think, performance: most objects are tagged with bitsets in order to pack more info about the object without needing additional allocations. But TypeScript can't really (ergonomically) represent this in the type system, so you don't get any really useful unions.
A lot of the objects are also secretly mutable (for caching/performance) which can make precise union types not very useful, since they can be easily invalidated by those mutations.
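A small illustration of the bitset tagging described above; the flag names are made up, not the actual tsc flag set:

    package main

    import "fmt"

    // NodeFlags packs many booleans about a node into one word:
    // no extra allocations, and cheap to test or set.
    type NodeFlags uint32

    const (
        FlagExported NodeFlags = 1 << iota
        FlagAsync
        FlagOptional
        FlagDeprecated
    )

    type Node struct {
        Flags NodeFlags
    }

    func (n *Node) Has(f NodeFlags) bool { return n.Flags&f != 0 }
    func (n *Node) Set(f NodeFlags)      { n.Flags |= f } // in-place mutation, as the comment above notes

    func main() {
        n := &Node{}
        n.Set(FlagExported | FlagAsync)
        fmt.Println(n.Has(FlagAsync))      // true
        fmt.Println(n.Has(FlagDeprecated)) // false
    }
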
In the embedded video they show some of the code side by side and it is just a ton of if statements.
https://youtu.be/pNlq-EVld70?si=UaFDVwhwyQZqkZrW&t=323
To be fair, there aren't many ways to implement a token matcher.
Though looking at that flood of loose ifs+returns, I kinda wish they'd used Rust :)
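For context, a token matcher in Go more or less has to look like this: a chain of character comparisons with occasional lookahead. This is a toy subset, nothing like tsc's real scanner:

    package main

    import "fmt"

    type TokenKind int

    const (
        TokenEOF TokenKind = iota
        TokenPlus
        TokenArrow
        TokenIdent
    )

    // nextToken maps each character (plus lookahead) to a kind by
    // brute-force comparison, which is why scanners end up as long
    // if/switch chains in any language.
    func nextToken(src string, pos int) (TokenKind, int) {
        if pos >= len(src) {
            return TokenEOF, pos
        }
        c := src[pos]
        switch {
        case c == '+':
            return TokenPlus, pos + 1
        case c == '=' && pos+1 < len(src) && src[pos+1] == '>':
            return TokenArrow, pos + 2 // two-character lookahead case
        case c >= 'a' && c <= 'z':
            end := pos
            for end < len(src) && src[end] >= 'a' && src[end] <= 'z' {
                end++
            }
            return TokenIdent, end
        }
        return TokenEOF, pos + 1
    }

    func main() {
        kinds := []string{"EOF", "+", "=>", "ident"}
        pos := 0
        for {
            k, next := nextToken("ab+cd=>e", pos)
            fmt.Println(kinds[k])
            if k == TokenEOF {
                break
            }
            pos = next
        }
    }
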
I’d guess Rust compile times weren’t worth it if they weren’t going to be taking advantage of the type system in interesting ways.
We had Daniel and Anders on the podcast to talk about the how and why of the native port if anyone is looking for an in-depth discussion → https://www.youtube.com/watch?v=ZlGza4oIleY
I'm really surprised by this visceral reaction to not choosing Rust. Go is a great language and I'd choose it for a majority of projects over Rust just based off of the simplicity of the language and the ability to spin up developers on it quickly. Microsoft is a big corporation.
Why _not_ use Go?
> Why _not_ use Go?
Because of its truly primitive type system, and because Microsoft already has a much better language — C#, which is both faster and can be more high level and more low-level at the same time, depending on your needs.
I am a complete nobody to argue with the likes of Hejlsberg, but it feels like AOT performance problems could be solved if tsc needed it, and tsc adoption of C# would also help push C#/.NET adoption. Once again, Microsoft proves that it's a bunch of unrelated companies at odds with each other.
I'm inclined to trust the judgement of Hejlsberg, the chief architect of C#, in this matter.
> Because of its truly primitive type system
That is the main reason they gave for why they chose Go. The parent asked "Why _not_ use Go?"
This is not "the main reason", lol, it was never stated as such. The type system could be way more powerful and, having the same general features they would probably had still picked it up.
What realistic contender doesn't have all the same general features as Go? It doesn't exactly have many to choose from, none of them particularly esoteric, and most of them bare necessities required of any language.
Let's be real: You can absolutely write "Go-style" code in just about any language that might have been considered for this. But you wouldn't want to, as a more advanced type system enables entirely different idioms, and it is in bad faith to other developers (including future you) to stray too far from those idioms. If ignoring idioms doesn't sound like a bad idea on day one, you'll feel the hurt and regret soon enough...
Go was chosen because the idioms are generally in alignment with their needs and those idioms are wholly dependent on the shape of its type system.
So they like having all the footguns?
It was stated from the angle of wanting to ship software sometime this century.
But there is probably some truth in what you say as well. Footguns are no doubt refreshing after being engrossed in TypeScript (and C#) for decades. At some point you start to notice that your tests end up covering all the same cases as your advanced types, and you begin to question why you are putting in so much work repeating yourself, which ultimately makes you want to look for something better.
Which, I suppose, is why industry itself keeps ending up taking that to the extreme, cycling between static and dynamic typing over and over again.
> At some point you start to notice that your tests end up covering all the same cases as your advanced types

I don't think this is fair [at all]; you use the types precisely so that you don't need to rely so heavily on tests. They either tell some objective truths about your code at compile time (thus reducing the natural need for specific tests) or your type system is simply useless. Either way, I don't think the "industry" is a person balancing itself on a pendulum; there are more things under the sun than we can count, and millions of individuals in their everyday projects may not reason things this way, and instead just choose "well, person X said this language is more maintainable and readable, and I trust X, so I'll use it" (which is a rational thing to do to some extent).
> I don't think this is fair [at all]; you use the types precisely so that you don't need to rely so heavily on tests
At the extreme end of the spectrum that starts to become true. But the languages that fill that space are also unusable beyond very narrow tasks. This truth is not particularly relevant to what is seen in practice.
In the realm of languages people actually use on a normal basis, with their half-assed type systems, a few more advanced concepts sprinkled in here and there really don't do anything to reduce the need for testing as you still have to test around all the many other holes in the type system, which ends up incidentally covering those other cases as well.
In practice, the primary benefit of the type system in these real-world languages is as it relates to things like refactoring. That is incredibly powerful and not overlapped by tests. However, the returns are diminishing. As you get into increasingly advanced type concepts, there is less need/ability to refactor on those touch points.
Most seem to agree that a complete type system is way too much (especially for general purpose programming), and no type system is too little; that a half-assed type system is the right balance. However, exactly how much half-assery is the right amount of half-assery is where the debate begins. I posit that those who go in deep with thinking less half-assery is the way eventually come to appreciate more half-assery.
> I don't think the "industry" is a person
Nobody does.
Hmmm, I think this is an interesting discussion. There's many sides I need to respond here, maybe I will not be able to cover everything but here I go.
See, I fundamentally disagree that those languages are "unusable beyond very narrow tasks", because I never stated that only a complete and absolutely proven type system can provide those proofs. In fact, even a relatively mid-tier (a little bit above average) type system like C#'s can already provide enormous benefits in this regard. When you test something like raw JavaScript, you end up testing things as basic as the shape of your objects; in C# you don't have to do this (because the type system dictates the shape). You also have to be very careful around possibly null objects and values, which in a language with "proper" nullable types (and support for them in the type system and static checkers) like C# can be vastly reduced (if you use the feature, naturally).

C# is also a language that "brings the types into runtime" through reflection, so there are things you don't need to test in your own code at all (only when developing the library); you will not see libraries meant to assert shapes, like 'zod' or 'pydantic', in C# or other mid-tier typed languages, for example. C#'s type system also proves many things about the safety of your code: you basically never need to test your usage of Spans, because the type system and static analysis already rule out most problematic usages. You also never need to test whether your int is actually a float because some random place in your code set it so (as in JS), nor test against many other basic assumptions that even an extremely basic type system (even Go's) would give you.
This is to say that, basically, this doesn't hold true for relatively simple type systems. I have also yet to see it hold true for more advanced ones. For example, Rust is a relatively widely used language for a lot of low-level projects, and I have never seen someone testing (well-bounded, safe) Rust code for the basic shapes of types, nor for the conclusions the type system provides while writing it: testing whether the type system was really able to catch that ownership transfer happening here, or whether it is really safe to assume there's only one mutable reference to that object after you called that method, or whether the destructor really runs at the end of the function's scope, or even whether the overly complex associated type result was actually what you meant it to be. (In fact, if you ever use those complicated types, it is precisely to get very strong compile-time guarantees that a test could not cover entirely, and that you would not write unit tests for in the first place.)

So I don't think it is true that you need a powerful type system to see a reduction in the tests you would have to write in a completely dynamically typed language, nor do I think it is true that once you have really powerful type constructs you will come to the conclusion that your "tests end up covering all the same cases as your advanced types". I also don't think you need to go to the extreme of the spectrum to see those benefits; they appear gradually and increase gradually as you move towards the end (where you find extremely uncommon things like dependent typing, refinement types or effect systems).
I also certainly don't agree that it matters whether "most people" think about powerful type systems and the languages using them; it matters more that the right people are using them -- people who actually want the benefits -- than the everyday masses (that's another overly complex discussion, though). And while I can understand your feelings toward the "low end of half-assed type systems", and even agree to a reasonable degree (with my own caveats, naturally), I don't think glorifying mediocre type systems is the way to go (as many people do, for some terrifying reason). It is enough to recognize that a half-assed type system usually gets the job done, and that's completely fine -- it may even be faster to write -- without trying to argue that we should "pursue primitive type systems" just because we can do good work in them. Maybe I'm digressing too much; it's hard to respond to this comment in a satisfactory manner.
>> I don't think the "industry" is a person
> Nobody does.
Yeah, this was not a very productive point of mine, sorry.
> I fundamentally disagree that those languages are "unusable beyond very narrow tasks"
Then why do you think nobody uses them (outside of certain narrow tasks)? It is hard to deny the results.
The reality is that they are intractable. For the vast majority of programming problems, testing is good enough and far, far more practical. There is a very good reason why the languages people normally use (yes, including C# and Rust) prefer testing over types.
> See, when you test for something like raw JavaScript, you end up testing things that are even about the shape of your objects
Incidentally, but not explicitly. You also end up incidentally testing things like the shape even in languages that provide strict guarantees in the type system. That's the nature of testing.
I do agree that testing is not well understood by a lot of developers. There are certainly developers who think that explicitly testing for, say, the shape of data is a test that needs to be written. A lot of developers straight up don't know what makes for a useful test. We'd do well to help them better understand testing, but I'm not sure "don't even think about it, you've got a half-assed type system to lean on!" gets us there. Quite the opposite.
> it matters more that the right people are using them
Well, they're not. And they are not going to without some fundamental breakthrough that changes the tractability of using languages with an advanced (on the full spectrum, not relative to Go) type system. The tradeoffs just aren't worth it in nearly every case. So we're stuck with half-assed type systems and relying on testing, for better or worse. Yes, that includes C# and Rust.
> I don't think glorifying mediocre type systems is the way to go (like many people usually do, for some terrifying reason).
Does it matter? Engineers don't make decisions based on some random emotional plea on HN. A keyboard cowboy might be swayed in the wrong direction by such, but then this boils down to being effectively equivalent to "If we don't talk about sex maybe teenage pregnancy will cease." Is that really the angle you want to go with?
Overly expressive type systems have way more potential for footguns than simple type systems. In fact, I would say that overly expressive type systems make it easy to create unmaintainable code (still waiting on this showstopping bug which nobody can debug because it uses overly expressive types in TS: https://github.com/openapi-ts/openapi-typescript/issues/1769)
I don't think TypeScript is an example of what people would call a "properly expressive type system". Sure, it is very expressive, but it is built to cover all the gaps JavaScript has, in a generally type-safe manner, and that calls for an EXTREMELY complex and open type system -- much more so than most languages will ever have -- so I don't think it really works as an example. The gap between maintainable and unmaintainable code sits between the chair and the screen, not in the type system of the language; the language merely lets a person encode more or fewer things in the specific places that can become unmaintainable (and anecdotally, most unmaintainable code I know doesn't even use complex type-system features -- it's just plain old messy state-mutating code scattered all around).
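For a taste of just how open that type system is, a small (purely illustrative) example of type-level computation that is legal, everyday TypeScript:

    // A recursive conditional type that splits a string literal type on a
    // delimiter -- the computation happens entirely at the type level.
    type Split<S extends string, D extends string> =
      S extends `${infer Head}${D}${infer Tail}`
        ? [Head, ...Split<Tail, D>]
        : [S];

    type Parts = Split<"a.b.c", ".">; // inferred as ["a", "b", "c"]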
Do we need to have these conversations weekly?
I wouldn't be asking if there wasn't a visceral reaction from Rust devs. I must have missed previous discussions on other threads.
Is this visceral reaction in the room with us now?
Edit: I have reached the bottom of the thread and still have not seen this visceral reaction mentioned by the OP.
https://github.com/microsoft/typescript-go/discussions/411
There's more reactions here. I think devs have lost the plot, tbh.
I am not sure we are seeing the same thread. There is one reaction from a "Rust" dev (who seems to have a very new GitHub account) asking why not Rust. Most of the others seem to be from the C# side. The pattern seems to be the same on the Reddit thread: there is one post asking why not Rust, and just as much (or more, depending on how you weigh it) is other people reacting to this news.
What is weird is how much people talk about how other people react. Modern social media is weird
There's at least 3 top-level threads criticizing the decision not to rewrite in Rust. Including a RIR banner ad posted in the replies.
Holy Language Wars are a spectator sport as old as the internet itself. It's normal to comment on one side fighting another. What's weird is pretending not to see the fighting
After years of PHP, I came to TypeScript nearly 4 years ago (for web frontend and backend development). All I can say is that I really enjoy using this programming language. The type system is just about enough to be helpful, and not too much to be in your way. Compiling the codebase is quite fast compared to other languages. With a 10x speedup, it will be so much more fun to code.
Never been a big fan of MS, but must say that typescript is well done imho. thanks for it and all the hard work!
microsoft has historically been great at programming languages. qbasic, visual basic, c#, and f# are all excellent.
Meanwhile .NET developers are still waiting for Microsoft to use their own "inventions" like Blazor, .NET MAUI, Aspire, etc. for anything meaningful. Bless them.
Aspire is made with Blazor
I know this is a port but I really hope the team builds in performance debugging tools from the outset. Being able to understand _why_ a build or typecheck is taking so long is sorely missing from today's Typescript.
Yes, 100% agree. We've spent so much time chasing down what makes our build slow. Obviously that is less important now, but hopefully they've laid the foundation for when our code base grows another 10x.
That's a pretty misleading clickbait title. TypeScript isn't getting 10x faster; the TypeScript compiler is getting 10x faster.
I would argue it needs editing, as it violates the HN guideline:
> use the original title, unless it is misleading or linkbait; don't editorialize.
There isn't a TypeScript runtime; it is just a JavaScript/ECMAScript compiler/transpiler with type checking and a language server
My initial interpretation of the title was that the TS team was adding support for another, faster, target such as the .NET runtime or native executables. The title could use some editing.
Sounds like they're automatically generating Go code from TS to some extent [0]. I wonder if they will open up the transpilation effort; that way you'd create a path for other TypeScript projects to generate fast native binaries
Opened discussion [1]
- [0] https://github.com/microsoft/typescript-go/discussions/410
- [1] https://github.com/microsoft/typescript-go/discussions/467
The automatic generation was mainly a step to help with manual porting, since it requires so much vetting and updating for differences in data layout; effectively all of the checker code Anders ported himself!
It seems that they port the code manually, probably with the help of LLMs.
https://github.com/microsoft/typescript-go/commits?after=dad...
I'd like to see if it makes a difference to the version of DOOM that runs in the TypeScript type system.
https://news.ycombinator.com/item?id=43184291
https://www.youtube.com/watch?v=0mCsluv5FXA
hi! author of the Doom thing, here. While I won't be the one to try, my answer is "absolutely yes, it will make a massive difference". Sub-1-day Doom-first-frame is probably a possibility now, if not much better, because the largest bottleneck for Doom-in-TypeScript-types was actually serializing the type to a string, which may well be considerably more than 10x faster. Hopefully someone will try some day!
i guess they started the rewrite exactly because of doom performance. timelines match.
Haha this was my first thought too
I wonder, for a Microsoft project, why not C#? Would have been a nice win for the home team.
Anders explains why Go in this podcast:
https://youtu.be/ZlGza4oIleY?t=1005
There’s also an FAQ discussion about the language choice: https://github.com/microsoft/typescript-go/discussions/411
TL;DR:
- Native executable support on all major platforms
- He doesn't seem to believe that AOT-compiled C# can give the best possible performance on all major platforms
- Good control of the layout of data structures
- Had to have garbage collection
- Great concurrency support
- Simple, easy to approach, and great tooling
So wild that most of these points are things C# was supposed to be good at, and they all boil down to "it's just not as good in C# as in Go"
Yea, sounds like cross platform AOT compiled C# not being mature and performant was a big reason that C# was rejected.
One other thing I forgot to mention: he talked about how the current compiler is mostly written as more or less pure functions operating on data structures, as opposed to being object-oriented, and that this fits very well with the Go way of doing things, making a 1:1 port much easier.
> sounds like cross platform AOT compiled C# not being mature and performant was a big reason
I don't think it was the performance. C# is usually on par or faster than Go.
Could be the lack of maturity but also that I believe Go produces smaller binaries which makes a lot of sense for a CLI.
I've never heard of C# being faster than Go, except on certain batched jobs, and even there Go can be better.
For example, look at the Techempower benchmarks.
I benchmarked HTML rendering and Dotnet was 2-3x faster than Go using either Templ or html/template.
Etc.
The c# benchmarks where they didn't use the framework to do any of the actual templating?
those hardcoded byte arrays are how everyone does templating everywhere right?
or are you talking about after they changed their "platform" test back to not do that, and it is now substantially slower than Go
https://dusted.codes/how-fast-is-really-aspnet-core
This is sorely outdated. Although for anyone with an axe to grind, Dustin's articles are convenient enough.
Immaturity of native AOT sounds like a likely culprit here. If they're after very fast startup times running classic C# is out. And native AOT is still pretty new.
You can write pure functions operating on data structures in C#, it's maybe not as idiomatic as in Go, but it should not cause problems.
> it's maybe not as idiomatic as in Go, but it should not cause problems.
Based on interviews, it seems Hejlsberg cares a lot about keeping the code base as idiomatic and approachable as possible. So it's clearly a factor.
I don't really get the OOP arguments from Anders. You don't need to do OOP stuff in C# -- just write a bunch of static functions if you want. However, I totally get the AOT aspect. Creating a simple CLI app meant for wide distribution in .NET isn't great, because you either have to ship the runtime or try to use AOT, which is very much off the beaten path. I have come to the same conclusion and used Go on some occasions for the same reason, despite not knowing it very well.
If doing a web server, on the other hand, these things wouldn't matter at all as you would be running a container anyway.
same reason i hate gradle/maven/ant: shipping a big runtime that many devs won't have installed for a build tool is bad. even with AOT, you still need a dotnet runtime.
A Microsoft project led by the chief architect of C#, no less.
Given the direction and efforts into projects like rspack, rolldown, etc. Why were they not considered as possible collaboration projects or integrations for this?
This isn't a knock against Go or necessarily a promotion of Rust, just seems like a lot of duplicated effort. I don't know the timelines in place or where the community projects were vs. the internal MS project.
So the end goal is that I can write a typescript application and deploy an executable to my server? Or is it just to deliver faster versions of typescript tools and MS developed typescript applications?
This is amazing. Everyone that picked TS for a big project was effectively betting that someone would do this at some point, so it's incredible to see it finally happen. Thanks to everyone involved!
Typescript compiles to javascript, so does this not prove what people have been screaming from the rooftops for so long that there's a significant performance penalty with typescript for almost no actual benefit?
> a significant performance penalty with typescript
There's a significant performance penalty for using javascript outside the browser.
I'm not aware of any JS runtime outside a browser that supports concurrency (other than concurrently awaiting IO), so you can't do parallel compilation in a single process.
It's generally also very difficult to make a JS program as fast as even a naive go program, and the performance tooling for go is dramatically more mature.
No.
You seem to be referring to runtime performance of compiled code. The announcement is about compile times; it's about the performance of the compiler itself.
One question that springs to mind is the in-browser "playground" and hosted coding use-case. I assume WASM will be used in that scenario. I'm wondering what the overhead is there.
Main overhead is shipping Go's WASM runtime to the client
Wow, this is huge! A 10x speedup is going to be game-changing for large TypeScript codebases like ours. I've been waiting for something like this - my team's project takes forever to typecheck on CI and slows down our IDE.
Hopefully this also reduces the memory footprint, because my VS Code IntelliSense keeps crashing unless I give it like 70% of my RAM -- probably because of our fairly large graphql.ts file, which contains auto-generated GraphQL types.
It’s not obvious from the text, but the compiler was previously written in TypeScript (which was kind of a strange choice for the language to write a compiler in).
The TypeScript compiler is more of a transpiler, not a typical compiler that creates a binary. I don't think it was a weird choice.
Bootstrapping compilers is a common activity and TypeScript is a nice language.
“Nice” doesn’t mean “well suited to writing a compiler in”. It’s strange to think that all languages should be equally good for writing all kinds of things, and choosing a web language for a non-web task is doubly strange.
Yep. I remember years ago when they celebrated getting the C# compiler working in C#.
That was the Roslyn project! Yes, they were also excited that it would allow more devs to hook into the compiler and also enhance it.
It's not strange, it's very common. It's called "bootstrapping".
> Bootstrapping is a fairly common practice when creating a programming language. Many compilers for many programming languages are bootstrapped, including compilers for ALGOL, BASIC, C, C#, Common Lisp, D, Eiffel, Elixir, Go, Haskell, Java, Modula-2, Nim, Oberon, OCaml, Pascal, PL/I, Python, Rust, Scala, Scheme, TypeScript, Vala, Zig and more.
https://en.wikipedia.org/wiki/Bootstrapping_(compilers)
Yet people wouldn’t write a Fortran compiler in Fortran, or a MATLAB compiler in MATLAB.
I don't consider it strange, and I'm not alone: https://news.ycombinator.com/item?id=37171801
is it not common to write compilers for languages in the language being compiled itself? rust does this i think?
It is fairly common, yes. Sometimes those compilers (or interpreters) aren't the primary implementation, but it's certainly a thing that happens often.
Most of the Rust compiler is in Rust, that's correct, but it does by default use LLVM to do code generation, which is in C++.
note that as others have said, "compiled" is a stretch, but nevertheless...
Programs that are less than full compilers in some sense can be bootstrapped.
For instance, this compiler for a pattern matching notation has parts of its implementation using the notation itself:
https://www.kylheku.com/cgit/txr/tree/stdlib/match.tl
Some pattern matching occurs in the function match-case-to-casequal. This is why it is preceded by a dummy implementation of non-triv-pat-p, a function needed by the pattern matching logic for classifying whether a pattern is trivial or not; it has to be defined so that the if-match and other macros in the following function can expand. The stub just says every pattern is nontrivial, a conservative guess.
non-triv-pat-p is later redefined. And it uses match-case! So the pattern matcher has bootstrapped this function: a fundamental pattern-classification function in the pattern matcher is written using pattern matching. Because of the way the file is staged, with the stub initial implementation of that function, this is all bootstrapped in a single pass.
Few things are more Microsofty than a team reaching over to a competitor's language instead of using their own and to boot none of the reasons given so far seem credible, good job to the team nonetheless.
Totally agree about the reasons; they have some hidden agenda behind this decision that they don't want to disclose. Rewriting in native code allows a step-by-step rewrite using a JS runtime with native extensions, but moving to a different VM mandates a big rewrite.
My most plausible guess would be that compiler writers don't want to dig into native code and performance, writing a TS to Go translator looks like a more familiar task for them. Lack of JS version performance analysis anywhere in the announcements kinda confirms this.
My read on why Go and not AOT C#: it would be more difficult to get C# programmers to give up idiomatic OOP in C# than to get them to switch to Go. Go is being used as a forcing function to push dev-culture change. This wouldn't generalize to teams that have other ways of dealing with cultural change.
I love all this native tooling for JS making things faster.
I kinda wonder, though, if in 5 or 10 years how many of these tools will still be crazy fast. Hopefully all of them! But I also would not be surprised if this new performance headroom is eaten away over time until things become only just bearable again (which is how I would describe the current performance of typescript).
Even if they freeze typescript development after the native implementation, given that the current performance was apparently acceptable to the current users, type complexity will just grow to use up the headroom
Plus, using TS directly to do runtime validation of types will become a lot more viable without having to precompile anything. Not only serverside, we'll compile the whole thing to WASM and ship it to the client to do our runtime validation there.
Syntax podcast has a conversation with Anders and Dan about it here: https://www.youtube.com/watch?v=ZlGza4oIleY&t=1s
Ah that'll be the thing Wes was under NDA about and teasing how excited he was on twitter last week!
haha yep, everyone thought it was related to Vite. Which I guess it kinda is?
I was hoping for something related to vacuums, but this is great too
The key:
> immutable data structures --> "we are fully concurrent, because these are what I often call embarrassingly parallelizable problems"
The relationship of their performance gains to functional programming ideas is explained beginning at 8:14 https://youtu.be/pNlq-EVld70?feature=shared&t=522
I guess this helps explain why Microsoft has their own fork of the Go language.
This will be very welcome. I've been working on refactoring very large Typescript files in a very large solution in VS2022. Sometimes it gets into a state where just editing the code or copy/pasting causes it to hang for a few seconds and the fans on my workstation to take off like a jet engine. The typing advantages my team has gotten from migrating our codebase to Typescript have been invaluable, but the performance implications really hurt.
Any plans for an AOT version of Typescript with strict typing that targets WASM or LLVM?
If you squint, Porffor[1] might end up being something like that.
It doesn't use type hints yet, and the difficulty there is that you'd need a sound type system in order to rely on the types. You may be able to use type hints to generate optimized and fallback functions, with type guards, but that doesn't exist yet and it sounds like the TypeScript team wants to move pretty quickly with this.
[1]: https://porffor.dev/
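To illustrate the optimized-plus-fallback idea mentioned above, a rough sketch only -- this is not how Porffor actually works today:

    // Hypothetical compiler output: a specialized fast path guarded by
    // runtime type checks, falling back to full JS semantics otherwise.
    function addFast(a: number, b: number): number {
      return a + b; // could lower to a native numeric add
    }
    function addGeneric(a: unknown, b: unknown): unknown {
      return (a as any) + (b as any); // full JS coercion semantics
    }
    function add(a: unknown, b: unknown): unknown {
      return typeof a === "number" && typeof b === "number"
        ? addFast(a, b)
        : addGeneric(a, b);
    }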
This is what I would have liked too: Figure out a sufficient subset of TypeScript that can be compiled to native/WASM and then write TSC in that subset.
While I like faster TSC, I don't like that the TypeScript compiler needs to be written in another language to achieve speed; it kind of reminds everyone that TS isn't a good language for complicated CPU/IO tasks.
Given that the TypeScript team has resigned itself to the fact that JavaScript engines can't run the TypeScript compiler (TSC) sufficiently fast for the foreseeable future, and is rewriting it entirely in Go, it is unlikely they will pursue AOT.
This already exists: Static TypeScript
https://makecode.com/language
https://www.microsoft.com/en-us/research/publication/static-...
If it is not supported by the same team that supports TypeScript, it is not really usable in real-world applications.
I guess the same applies for using Go then.
In Golang, wow. That gives me more confidence to adopt Go in projects.
This is frustrating:
> The JS-based codebase will continue development into the 6.x series, and TypeScript 6.0 will introduce some deprecations and breaking changes to align with the upcoming native codebase.
> While some projects may be able to switch to TypeScript 7 upon release, others may depend on certain API features, legacy configurations, or other constraints that necessitate using TypeScript 6. Recognizing TypeScript’s critical role in the JS development ecosystem, we’ll still be maintaining the JS codebase in the 6.x line until TypeScript 7+ reaches sufficient maturity and adoption.
It sounds like the Python 2 -> 3 migration, or the .Net Framework 4 -> .Net 5 (.Net Core) migration.
I'm still in a multi-year project to upgrade past .Net Framework 4, so I can certainly empathize with anyone who gets stuck on TS 6 for an extended period of time.
Better a language that deprecates and breaks things at regular intervals of time compared to a language that has Forever Backward Compatibility like C++ and evolves into a mutated, tentacled monster that strangles developers who are trying to maintain a project.
I lived and worked through the Python 2->3 fiasco, working on a Python library that had to run on both versions. I have since abandoned the language. Python 3 was both slower and not backwards compatible, whereas TSC 7 is 10x faster and uses half the memory. I'm not worried.
This is mostly about the tooling and ecosystem, they want to stop things from depending on the internal workings of the compiler. If you just want to write and compile TS you'll be fine, it does not mean breaking changes to actual TypeScript grammar.
Yeah, this is not ideal. I’m hoping that the breaking changes don’t affect the code at my work, since we also had to spend multiple years on a major .NET Core transition. I want the faster compiles right away, not in a few years.
I really wonder why this project was not developed on .NET Core. It would then have been possible to embed it in .NET projects, increasing the number of libraries available in the ecosystem. It would also have leveraged the .NET GC, which is better than Go's. Rewriting in Go really doesn't make sense to me.
“Developers rewrite tools from dynamic language to statically compiled one - improves performance by 10x”
Also, what’s up with 10x everywhere? Why not 9.5x or 11x?
It should be in OCaml. This is why I think OCaml should be compilable to the Go runtime/ABI.
I've been dreaming about this for years! Never been so pumped.
Oh man, this is great. I've been having performance issues with TSC for language services.
My theory - that Go will always be the choice for things like this when ease, simplicity, and good (but not absolute) performance is the goal - continues to hold.
This is great news. We actually use esbuild most of the time to transpile TS files because tsc is so slow (and only run tsc in CI/CD pipelines). Coincidentally, esbuild is also golang
Dumb question: is this a 10x speed up in the run-time of TypeScript ... or just the build tooling?
And if it's run-time, can we expect browsers to replace V8 with this Go library?
(I realize this is a noob/naive question - apologies)
This is specifically about the performance of the TypeScript toolchain (compiler, editor experience); the runtime code generated is the same. TypeScript is just JS with types.
Just building (and supporting features like LSP)
There is no Typescript runtime, it's just a transpiler.
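A tiny illustration of why there's nothing to make faster at runtime: compiling TypeScript mostly just erases the annotations (a sketch):

    // input.ts
    function add(a: number, b: number): number {
      return a + b;
    }

    // tsc's emitted JavaScript -- the types are simply gone:
    // function add(a, b) {
    //     return a + b;
    // }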
Not sure if this point was brought up but I think it's worth considering.
If the Typescript team were to go with Rust or C# they would have to contend with async/await decoration and worry about starvation and monopolization.
Go frees the developer from worrying about these concerns.
Faster compilation is great, but what I'm really excited for is a faster TS Language Server. Being able to get autocomplete hints, hover info, goto definition, error squiggles, and more, anything close to 10x faster, is going to be revolutionary when working in large TS codebases.
Funny, until now I always thought that TypeScript is JavaScript with some C# vibes
https://news.ycombinator.com/item?id=43320086
What percentage of the new code was written by an LLM?
So is the language server still not going to match the LSP spec? Even though it's getting a complete rewrite?
Has there been any talk/progress on native inclusion of TypeScript in Node.js -- type checking, path resolution without using tsc, ts-node, or tsx, native VS Code TS debugging and testing support? We are 22 versions into Node.js and the support still seems limited at best. Is it possible to share a roadmap of what is being done in this territory?
There is this, which does not do type checking:
https://devblogs.microsoft.com/typescript/announcing-typescr...
This will only allow you to run your TypeScript in Node; it does not perform type checking, and I don't believe there are any plans for it to. This is as of Node.js 23.9.0:
https://nodejs.org/api/typescript.html#type-stripping
I don't believe Node has any plans for type checking TS.
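For reference, the type-stripping flow looks roughly like this (flag names per the Node docs linked above; the feature is experimental, so details may shift):

    // app.ts -- uses only erasable syntax (no enums, no namespaces)
    const port: number = 3000;
    console.log(`listening on ${port}`);

    // Run directly, without tsc:
    //   node --experimental-strip-types app.ts   // Node 22.6+ (opt-in)
    //   node app.ts                              // Node 23.6+ (on by default)
    // Node strips the types; it does NOT check them.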
Sad to see them using Go and not Anders’s own language (Turbo Pascal 7) for this
Something that kind of got understated in here IMO is the improved refactoring and code intelligence that this will unlock. Very exciting! I am looking forward to all the new tooling and frameworks that come out of this change. TS is already an amazing language and just keeps getting better!
From the post:
> Modern editors like Visual Studio and Visual Studio Code have excellent performance.
Well I am not sure we are on the same page here. Still, fingers crossed.
Typescript was the best thing that ever happened to the web! Thanks Daniel, Ryan and Anders and the rest of the team for making development great for over 10 years! This improvement is amazing!
>Typescript was the best thing that ever happened to the web!
My development in regards to language:
- Javascript sucks I love Python.
- Python sucks I love Typescript.
Will TS v7 only support erasable syntax? e.g. no enums?
TS v5.8 added the --erasableSyntaxOnly option, to pair with Node.js 23.6's ability to run TS directly; it errors on enums (as well as namespaces and other non-erasable syntax). I haven't found anything that mentions the deprecation of enums when searching; TS v6 is supposed to be as feature-compatible with v7 as possible, and since enums are not a type-level feature of JS I wouldn't rely on them.
Right now you can use --erasableSyntaxOnly to find any enums in your code and start porting to an alternative. This article lists alternatives, if you're interested:
https://exploringjs.com/tackling-ts/ch_enum-alternatives.htm...
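One common alternative from that article, sketched briefly: a plain const object plus a derived union type gives enum-like ergonomics with fully erasable syntax.

    // enum Direction { Up, Down }  // emits runtime code; rejected under
    //                              // --erasableSyntaxOnly

    // Erasable alternative: const object + derived union type.
    const Direction = {
      Up: "Up",
      Down: "Down",
    } as const;
    type Direction = (typeof Direction)[keyof typeof Direction]; // "Up" | "Down"

    function move(d: Direction) { /* ... */ }
    move(Direction.Up); // OK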
I get that the choice was well thought out, but it would have been nice to use the same language as most of the modern tools (Rust)
Do any other well-adopted tools in the ecosystem use Go?
> other well-adopted tools in the ecosystem use Go
esbuild is the most well-known/used project, probably beats all other native bundlers combined. I can't remember anything else off the top of my head.
https://github.com/evanw/esbuild
One question I'm surprised isn't discussed here is how much AI code generation was used in this port. It seems like the perfect use case for it.
Performance is a feature
Can’t wait for a better TSC Doom framerate.
What do they mean by improving editor startup time? Does the editor (I assume vscode?) run the compiler as part of the startup? Why?
There are various ways to (de)couple the compiler to/from vscode, but it's definitely handy to have inline typechecking. Is this possible without running the compiler?
Interesting Microsoft using Golang for this!
I'm sold.
I'll give Typescript yet another go. I really like it and wish I could use it. It's just that any project I start, inevitably the sourcemap chain will go wrong and I lose the ability to run the debugger in any meaningful way.
yes, this will definitely vastly increase the Doom fps, haha (I’m the guy that did that project). But I think there’s a lot more to it than that.
tl;dr — Rust would be great for a rewrite, but Go makes way more sense for a port. After the dust settles, I hope people focus on the outcomes, not the language choice.
I was very surprised to see that the TypeScript team didn’t choose Rust, not just because it seemed like an obvious technical choice but because the whole ecosystem is clearly converging on Rust _right now_ and has been for a while. I write Rust for my day job and I absolutely love Rust. TypeScript will always have such a special place in my heart but for years now, when I can use Rust.. I use Rust. But it makes a lot of sense to pick Go.
The key “reading between the lines” from the announcement is that they’re doing a port not a rewrite. That’s a very big difference on a complex project with 100-man-years poured into it.
Places where Go is a better fit than Rust when porting JavaScript:
- Go, like JavaScript and unlike Rust, is garbage collected. The TypeScript compiler relies on garbage collection in multiple places, and there are probably more that do but no one realizes it. It would be dangerous and very risky to attempt to unwind all of that. If it were a Rust rewrite, this problem goes away, but they’re not doing a rewrite.
- Rust is so stupidly hard. I repeat, I love Rust. Love it. But damn. Sometimes it feels like the Rust language actively makes decisions that demolish the DX of the 99.99% use-case if there’s a 0.001% use-case that would be slightly more correct. Go is such a dream compared to Rust in this respect. I know people that more-or-less learned Go in a weekend and are writing it professionally daily. I also know people that have been writing Rust every day professionally for years and say they still feel like noobs. It’s undeniable what a difference this makes on productivity for some teams.
Places where Go is just as good a fit as Rust:
- Go and Rust both have great parallelism/concurrency support. Go supports both shared memory (with explicit synchronization) and message-passing concurrency (via goroutines & channels). In JavaScript, multi-threading requires IPC with WebWorkers, making Go's concurrency model a smoother fit for porting a JS-heavy codebase that assumes implicit shared state. Rust enforces strict ownership rules that disallow shared state, or at least make it a lot harder (by design, admittedly).
- Go and Rust both have great tooling. Sure, there are so many Rust JavaScript tools, but esbuild definitively proves that Go tooling can work. Heck, the TypeScript project itself uses esbuild today.
- Go and Rust are both memory safe.
- Go and Rust have lots of "zero (or near zero) cost abstractions" in their language surface. The current TypeScript compiler codebase makes great use of TypeScript enums for bit fiddling and packing boolean flags into a single int32 (a sketch of the pattern follows below). It sucks to deal with (especially with a Node debugger attached to the TypeScript typechecker). While Go structs are not literally zero cost, they're going to be SO MUCH nicer than JavaScript objects for a use-case like this that's so common in the current codebase. I think Rust sorta wins when it comes to plentiful abstractions, but Go has more than enough to make a huge impact.
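A rough sketch of that flag-packing pattern (the flag names here are illustrative, not the compiler's actual set):

    // Packing boolean properties into a single int32 with bitwise flags.
    const enum NodeFlags {
      None     = 0,
      Exported = 1 << 0,
      Async    = 1 << 1,
      Const    = 1 << 2,
    }

    let flags = NodeFlags.Exported | NodeFlags.Async;
    const isAsync = (flags & NodeFlags.Async) !== 0; // test a flag
    flags &= ~NodeFlags.Exported;                    // clear a flag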
Places where Rust wins:
- the Rust type system. no contest. In fairness, Go doesn’t try to have a fancy type system. It makes up for a lot of the DX I complained about above. When you get an error that something won’t compile, but only when targeting Windows because Rust understands the difference in file permissions… wow. But clearly, what Go has is good enough.
- so many new tools (basically, all of them that are not also in JS) are being done in Rust now. The alignment on this would have been cool. But hey, maybe this will force the bindings to be high-quality which benefits lots of other languages too (Zig type emitter, anyone?!).
By this time next week when the shock wears off, I just really hope what people focus on is that our TypeScript type checking is about to get 10 times faster. That’s such a big deal. I can’t even put it into words. I hope the TypeScript team is ready to be bombarded by people trying to use this TODAY despite them saying it’s just a preview, because there are some companies that are absolutely desperate to improve their editor perf and un-bottleneck their CI. I hope people recognize what a big move this is by the TypeScript team to set the project up for success for the next dozen years. Fully ejecting from being a self-hosted language is a BIG and unprecedented move!
A tiny thing that's not relevant to this particular piece of work, but worth having in the background when thinking about Go: while Go, like Python, is typically described as "memory safe", unlike in Java (or, more remarkably, Rust) it is very possible for naive programmers to cause undefined behaviour in this language without realising it.
Specifically if you race any non-trivial Go object (say, a hash table, or a string) then that's immediately UB. Internally what's happening is that these objects have internal consistency rules which you can easily break this way and they're not protected against that because the trivial way to do so is expensive. Writing a Go data race isn't as trivial as writing a use-after-free in C++ but it's not actually difficult to do by mistake.
In single threaded software this is no caveat at all, but most large software these days does have some threading involved.
> I was very surprised to see that the TypeScript team didn’t choose Rust
Typescript is a Microsoft project, right? I’m surprised they didn’t choose C#.
Especially given Anders is the one announcing this, given he was the chief architect of C#. But C# AOT is maybe not as mature/lightweight as a Go binary and clearly startup time here is very important. [Edit: the real reason is in the FAQ posted in a bunch of other comments https://github.com/microsoft/typescript-go/discussions/411]
he went into the C# question in more detail in this interview: https://youtu.be/10qowKUW82U?t=1154s
imho Go is a far easier language to learn than Rust, so it lowers the barrier to entry for new contributors.
Which is a massive pro for any open source project
Some big projects have so many people trying to do PRs that it's actually a bit of a hassle to deal with them all. So I don't think maximising the number of contributors should necessarily be one of the top goals for projects that are already big or have guaranteed relevance.
Is learning a language even a thing anymore with $Internal_or_external_LLM_helper plugin available for every IDE? I haven't found syntax lookups to be that much a concern anymore and any boneheaded LLM suggestions are trivial to detect/fix.
You still need to know the language it generates, otherwise you're generating gobbledygook
> Go and Rust are both memory safe.
Go doesn't seem to be memory safe, see https://www.reddit.com/r/rust/comments/wbejky/comment/ii7ak8... and https://go.dev/play/p/3PBAfWkSue3
"Memory safety" is a term of art meaning susceptibility to memory corruption attacks. They had to come up with some name for it; that's the name they came up with. This is a perennial tangent in conversations among technologists: give something a legible name, and people will try to axiomatically (re)define it.
Rust is memory safe. Go is memory safe. Python is memory safe. Typescript is memory safe. C++ is not memory safe. C is not memory safe.
This is true in that if you pass pointers through goroutines, you do not have guarantees about what's at the end of that pointer. However, this is "by design": generally you shouldn't do that. The overhead the Go memory model places on developers is to remember what's passed by value and what's passed by pointer, and act accordingly. The rest it takes care of for you.
The burden placed by rust on the developer is to keep track of all possible mutability and readability states and commit to them upfront during development. (If I may summarize, been a long time since I wrote any Rust). The rest it takes care of for you.
The question of which a developer prefers at a certain skill level, and which a manager of developers at a certain skill level prefers, is going to vary.
I love Rust, but you can play exactly the same game with Rust: https://github.com/Speykious/cve-rs
I mean, no? That's basically a known bug in Rust's compiler, specifically it's a soundness hole in type checking, and you'd basically never write it by accident - go read the guts of it for yourself if you think you might accidentally do this.
At some point a next generation solver will make this not compile, and people will probably invent an even weirder edge case for that solver.
Whereas the Go example is just how Go works, that's not a bug that's by design, don't expect Go to give you thread safety that's not what they promised.
thank you for the clarification. you're right. I guess I was just trying to say that it's a spectrum (even if Rust is very very far along the way towards not having any holes). I can't seem to find it but there's some Tony Hoare or maybe Alan Turing quote or something like that about the only 100% correct computer program to ever exist was the first one.
That is not a violation of memory safety, that's a violation of concurrency safety, which Go doesn't promise (and of course, Rust does.)
Segfaults are very much a memory safety issue. You are correct that concurrency is the cause here, but that doesn't mean it's not a memory safety issue.
That said, most people still call Go memory safe even in spite of this being possible, because, well, https://go.dev/ref/mem
> While programmers should write Go programs without data races, there are limitations to what a Go implementation can do in response to a data race. An implementation may always react to a data race by reporting the race and terminating the program. Otherwise, each read of a single-word-sized or sub-word-sized memory location must observe a value actually written to that location (perhaps by a concurrent executing goroutine) and not yet overwritten. These implementation constraints make Go more like Java or JavaScript, in that most races have a limited number of outcomes, and less like C and C++, where the meaning of any program with a race is entirely undefined, and the compiler may do anything at all.
That last sentence is the most important part. Java in particular specifically defines that tears may happen in a similar fashion, see 17.6 and 17.7 of https://docs.oracle.com/javase/specs/jls/se8/html/jls-17.htm...
I believe that most JVMs implement dynamic dispatch in a similar manner to C++, that is, classes are on the heap, and have a vtable pointer inside of them. Whereas Go's interfaces can work like Rust's trait objects, where they're a pair of (data pointer, vtable pointer). So the behavior we see here with Go is unlikely to be possible in Java, because the tear wouldn't corrupt the vtable pointer, because it's inside what's pointed at by the initial pointer, rather than being right after it in memory.
These bugs do happen, but they have a more limited blast radius than ones in languages that are clearly unsafe, and so it feels wrong to lump Go in with them even though in some strict sense you may want to categorize it the other way.
Sure, that's all true. It does limit Go's memory safety guarantees. However, I still believe that just because Java and other languages can give better guarantees around the blast radius of concurrency bugs does not mean that Go's definition of memory safety is invalid. I believe you can justifiably call Go memory-safe with unsafe concurrency. This may give people the wrong idea about where exactly Go fits in on the spectrum of "safe" coding (since, like you mentioned, some languages have unsafe concurrency that is still safer,) but it's not like it's that far off.
On the other hand, though, in practice, I've wound up using Go in production quite a lot, and these bugs are excessively rare. And I don't mean concurrency bugs: Go's concurrency facilities kind of suck, so those are certainly not excessively rare, even if they're less common than I would have expected. However... not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.
So how severely you treat this lapse is going to come down to taste. I see the appeal of Rust's iron-clad guarantees around limiting the blast radius, but of course everything comes with limitations. I believe that any discussion about the limitations of guarantees like these should have some emphasis on the real impact. e.g. It's easy enough to see that the issues with memory management in C and C++ are serious based on the security track record of programs written in C and C++, I think we're still yet to fully understand how much of an impact Go's lack of safe concurrency will impact Go software in the long run.
> On the other hand, though, in practice, I've wound up using Go in production quite a lot, and these bugs are excessively rare.
I both want to agree with this, but also point to things like https://www.uber.com/en-CA/blog/data-race-patterns-in-go/, which found a bunch of bugs. They don't really contextualize it in terms of other kinds of bugs, so it's really hard to say from just this how rare they actually are. One of the insidious parts of non-segfaulting data race bugs is that you may not notice them until you do, so they're easy to under-report. Hence the checker used in the above study.
> not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.
For sure, absolutely. And I do think that's meaningful and important.
> I think we're still yet to fully understand how much of an impact Go's lack of safe concurrency will impact Go software in the long run.
Yep, and I do suspect it'll be closer to Java than to C.
The Uber page does a pretty good job of summing it up. The only thing I'd add is that there has been a little bit of effort to reduce footguns since they've posted this article; as one example, the issue with accidentally capturing range for variables is now fixed in the language[1]. On top of having a built-in (runtime) race detector since 1.1 and runtime concurrent map access detection since 1.6, Go is also adding more tools to make testing concurrent code easier, which should also help ensure potentially racy code is at least tested[2] (ideally, with the race detector on.) Accidentally capturing named return values is now caught by a popular linting tool[3]. There is also gVisor's checklocks analyzer, which, with the help of annotations, can catch many misuses of mutexes and data protected by mutexes[4]. (This would be a lot nicer as a language feature, but oh well.)
I don't know if I'd evangelize for adopting Go on the scale that Uber has: I think Go works best for shared-nothing architectures and gets gradually less compelling as you dig into more complex concurrency. That said, since Uber is an early adopter, there is a decent chance that what they have learned will help future organizations avoid repeating some of the same issues, via improvements to tooling and the language.
[1]: https://go.dev/blog/loopvar-preview
[2]: https://go.dev/blog/synctest
[3]: https://github.com/mgechev/revive/blob/HEAD/RULES_DESCRIPTIO...
[4]: https://pkg.go.dev/gvisor.dev/gvisor/tools/checklocks
Ah, that's great info, thank you :)
> The TypeScript compiler relies on garbage collection in multiple places
What? And how? And how would that help in Go which has a completely different garbage collection mechanism?
As in: there's no allocation/deallocation code. The code relies on garbage collection to function.
Why not just work with the SWC folks and get the Rust implementation mainlined?
They want exact backwards compatibility with the JS implementation, so they're doing a line-by-line port.
Kinda shows that there is no practical ML-family language with good concurrency support.
Don't think so; he stated one of the most important reasons was code compatibility, not specifically good concurrency support (though that was important, indeed). I think even the most functional languages would not be easily compatible with "functional TypeScript code" without heavy modification. Either way, there is room for innovation in the field; I'm yet to see an ML-family language with concurrency as "hands on" as Go's. It would be extremely interesting to see that happen.
Misleading title. TypeScript isn't getting 10x faster. The compiler is 10x faster.
TS is nothing but a compiler
It compiles to JS, one possible read would be that TS compiles to JS which runs 10x faster due to optimizations that can be made.
would be some very wishful reading!
Just use fable and F# instead, your code transpiles to python and rust too
For the small price of 10x slower tooling.
I’ve been using F# full-time for 6 years now. And compiler/tooling gets painfully slow fast.
Still wouldn’t trade it for anything else though.
I wonder how much faster DOOM will run on this
Use browser and web for websites, not applications. For apps, create native downloadable desktop software, which also work offline.
I work at Google (the original and worst offender of this) and I advocate for native binaries all the time.
I prefer having all my apps in the browser.
Funnily enough, that's exactly what they're doing in this announcement. They're rewriting `tsc` in Go and shipping native binaries, rather than shipping JS.
I didn't quite get it: they compile TS to JS using the compiler now written in Go, right? But we as end users still get JS, not a native app.
this is huge! thank you!
Typescript is a nice programming language, Javascript is not, I am glad
Once people figure out how much faster their apps will be, they'll add enough features to slow it down again.
Too bad they didn’t choose Rust, would have loved contributing (not picking up Go, sry)
did you contribute to the current TypeScript codebase? (not intended snarky, just curious)
a couple of commits merged yrs back, things I stumbled on that I used as an excuse to learn more about internals
I am actually shocked that Anders chose Go over C# for this port.
I wonder how much that would have helped the guy who implemented Doom in TS types only.
So now tsc is a binary, browsers can efficiently bundle it and compile index.ts on the fly ... please?
> browsers can efficiently bundle it
That's really not what's stopping TS being built in to browsers. Have a look at the discussions around the types-as-comments proposal https://tc39.es/proposal-type-annotations/
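The gist of that proposal, roughly sketched: engines would parse the annotations and then ignore them, so code like this would run unmodified in a browser with no checking at all:

    // Under types-as-comments, the annotations below are ignorable
    // syntax to the engine -- parsed, then skipped at runtime.
    function stringify(value: number): string {
      return String(value);
    }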
This kinda begs the question: should we port all backend Typescript code to Go (or Rust) to get a similar runtime performance improvement? Is Typescript generally this inefficient?
You could profile it and find out.
Another commenter pointed out that compilers have very different performance characteristics to games, and I'll include web servers in that too.
tsc needs to start up fast and finish fast. There's not a ton of time to benefit from JIT.
Your server on the other hand will run for how long between deployments?
If your backend is JS and it's too slow for you, then obviously porting it to a machine code binary will speed it up significantly. If you are happy with your backend performance, then does it matter?
Keep in mind most apps made in frameworks aren't using `tsc` but rather existing tools like `esbuild` which are native binaries.
Now make it executable in the go runtime please:)
The post title is a bit misleading. It should say a 10x faster build time, or a 10x faster TypeScript compiler. tsc (compiler) is 10x faster, but not the final TS program runtime. Still an amazing feat! But doom will not run faster
"To meet those goals, we’ve begun work on a native port of the TypeScript compiler and tools. The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage."
To clarify why it's actually not that ambiguous: TS is not (and does not have) a runtime at all. Even TS-first runtimes like Deno are (1) not TS but its own thing and most importantly (2) just JS engines with a frontend layer that treats TS as a first-class citizen (in Deno's case, V8).
It's hard to tell if there will even be a runtime that somehow uses TS types to optimize even further (e.g. by proving that a function diverges) but to my knowledge they currently don't and I don't think there's any in the works (or if that's even possible while maintaining runtime soundness, considering you can "lie" to TS by casting to `unknown` and then back to any other type).
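That kind of "lie" is trivially easy to write, which is a big part of why the types can't be trusted for optimization. A minimal example:

    const n = 42 as unknown as string; // typechecks fine
    n.toUpperCase(); // runtime TypeError: n.toUpperCase is not a function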
“faster typescript” would also be a valid way to say the typescript compiler found a way to automatically write more performant javascript.
Just like if you said faster C++ that could mean the compiler runs faster, or the resulting machine code runs faster.
Just because the compile target is another human readable language doesn’t mean it ceases to be a typescript program.
I didn’t think this particular example was very ambiguous because a general 10x speed up in the resulting JS would be insane, and I have used typescript enough to wish the compiler was faster. Though if we’re being pedantic, which I enjoy doing sometimes, I would say it is ambiguous.
> “faster typescript” would also be a valid way to say the typescript compiler found a way to automatically write more performant javascript.
That still wouldn't make sense, in the same way that it wouldn't make sense to say "Python type hints found a way to automatically write more performant Python". With few exceptions, the TypeScript compiler doesn't have any runtime impact at all — it simply removes the type annotations, leaving behind valid JavaScript that already existed as source code. In fact, avoiding runtime impact is an explicit design goal of TypeScript [1].
They've even begun to chip away at the exceptions with the `erasableSyntaxOnly` flag [2], which disables features like enums that do emit code with runtime semantics.
[1] https://github.com/microsoft/TypeScript/wiki/TypeScript-Desi...
[2] https://www.typescriptlang.org/docs/handbook/release-notes/t...
> Python type hints found a way to automatically write more performant Python
I get your point, but... this is exactly the premise of mypyc ;)
Thanks for the clarification. For those of us who don't use TypeScript day to day, I feel that it is ambiguous. Without clicking the link, you wouldn't know if it's about a compiler or a runtime. What if they announced a bun competitor?
https://betterstack.com/community/guides/scaling-nodejs/node....
Those are javascript runtimes, not TypeScript runtimes. The point stands.
If you don't know enough about TypeScript to understand that TypeScript is not a runtime, I'm not sure why you would care about TypeScript being faster (in either case).
I thought the title was announcing someone created a Typescript runtime. It is misleading.
Preact was "a faster React", for example.
if typescript code execution got that much faster it might be a reason for someone to look into the language even if they knew nothing about it.
There are plenty of other reasons to consider TypeScript, but again, what code execution are referring to? The V8 JavaScript engine?
that's not the point I was making - gp was wondering why someone who didn't even know typescript compiled to javascript and ran atop a javascript engine would care that it had gotten 10x faster.
From the title, my initial assumption was someone wrote a compiler & runtime for typescript that doesn't target javascript, which was very exciting. And I do work with typescript.
> Without clicking the link, you wouldn't know if it's about a compiler or a runtime
I mean I think generally you’d want to click the link and read the article before commenting
It has become a sport here to criticize titles for not explaining any random thing the commenter doesn't know. Generally these things are either in the article or they are very easily findable with a single web search.
If you have to explain why something is not ambiguous it is by definition ambiguous.
Maybe they aren't the audience. I don't see how this is ambiguous to anyone that actually uses typescript
It was ambiguous to me. I've used TS a few times over the years, so I thought "native TypeScript compiler" meant AOT TS, not a TS compiler written in Go
deno runs typescript and it won't run 10x faster. It is ambiguous.
It seems Deno compiles typescript to JS just like everyone else.
https://docs.deno.com/runtime/fundamentals/typescript/
No. Ambiguous means that a statement has many possible meanings, not simply that something might be confusing.
that would imply the existence of an objective authority on the meaning of the statement, which is debatable
I'm a bit confused:
- It's not ambiguous because they mean $X.
- It is ambiguous because it has many possible meanings.
- It is not ambiguous because it has many possible meanings
there is Static Hermes from Meta that does AOT compilation to native, so I find it actually ambiguous. For a second I thought they did a compiler instead of a transpiler.
> It's hard to tell if there will even be a runtime that somehow uses TS types to optimize even further
Typescript's type system is unsound so it probably will never be very useful for an optimizing compiler. That was never the point of TS however.
> It's hard to tell if there will even be a runtime that somehow uses TS types to optimize even further.
Yeah, that exists. AssemblyScript has an AOT compiler that generates binaries from statically typed code.
AssemblyScript is a very limited subset of the language though.
Unfortunately many TS users have a surface level understanding of TS leading them to believe that TS is "real"
I use TS a lot and still assumed they are embarking on a native runtime/compiler whatever epic journey.
I don't think this is misleading for anyone familiar with Typescript. Typescript itself has no impact on performance, and it is known that the compilation and type-checking speed is often a problem. So I immediately assumed that it was about exactly that.
When I read the title I thought maybe they implemented a typescript to binary (instead of javascript) code compiler that speeds up the program by 10x, it would also have the added benefit of speeding up the compiler by 10x!
I don't think that is too far fetched either since typescript already has most of the type information.
I can think of a DOOM that WILL run faster…
https://youtu.be/0mCsluv5FXA
Ah thanks! I didn't realize there was a Doom running on the TS type system. I stand corrected
That is a really funny coincidence. Of all the examples you could have picked...
lol it just “released” recently. Like in the last couple of weeks. It shook the typescript world.
It’s been a crazy couple of weeks for TS!!
Agree. TypeScript is primarily a programming language. Did they make the language faster? No. Hence, the title is misleading.
For anyone who uses TypeScript on a daily basis it's not ambiguous at all. Everyone who works with TS knows the runtime code is JavaScript code that is generated by the TypeScript compiler. And it's also pretty common knowledge that JavaScript is quite fast, but TS itself is not.
And if this post was about a TS compiler that emitted x86 executables you would be wrong and find out that it is indeed ambiguous.
Why would a hypothetical "tsx86" project write an article titled "10x faster typescript" instead of "10x faster binaries with tsx86 2.0"?
If you have to invent things for something to be considered ambiguous, is it really ambiguous?
That's debatable. I think most people that work with TS see it as a syntax extension for JS. Do you think JSX is a programming language?
I don't think it's misleading at all, because you can't run Typescript. Typescript is either compiled, transpiled or stripped down into another language and that's what gets run in the end.
You can't run Java either as it's compiled to bytecode, yet when someone says "we made Java 10x faster" you wouldn't assume that just the compilation got faster, right? When people market Rust projects as blazingly fast nobody assumes it's about compilation, in part because a blazingly fast Rust compiler would be a miracle. Outside of this comment section people have always been using a programming language name for this because everyone knows what they mean.
It would be possible that MS wrote a TypeScript compiler that emits native binaries and that made the language 10x faster, why not?
You could make the same argument of anything but bytecode and even then some would debate if it's really running directly enough on modern CPUs. In the end it still remains that you have the time it takes to build your project in a given language and the runtime performance of the end result. Those remain very useful distinctions regardless of how many layers of indirection occur between source code and execution.
The difference here is that with Typescript, you're not really measuring Typescript's performance, but whatever your output language is. If you transpile to Javascript, you're measuring that; if you output Wasm, you measure that, etc. The result isn't really dictated by Typescript.
Transpiling isn't the only possibility to run TypeScript code, it's just the way to do it right now. A long time ago interpreting was the most common way to run JavaScript, now it's to JIT it, but you can also compile it straight to platform byte code or transpile it to C if you really want. That you could transpile JavaScript to C doesn't mean all ways of doing it would be equally performant though.
Transpiling in itself also doesn't remove the possibility of producing more optimized code, especially if the source carries more type information. The official TypeScript compiler doesn't really do any of that right now (e.g. it won't remove a `typeof x === "number"` branch even when the type information proves the variable can never hold a number). Heck, it doesn't even natively support emitting minified output to speed up runtime parsing (you can always bolt that on yourself)! In both examples it's not that transpilation prevents optimization; it's just not done (or possibly not worthwhile if TS only ever targets JS runtimes, since JS JITs are extraordinarily good these days).
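A hypothetical sketch of the dead branch mentioned above (strict mode assumed; the emitted JavaScript keeps the check verbatim):

    function describe(x: string) {
      // The type system proves x can never be a number here, yet the
      // emitted JS retains the check: tsc erases types, it does not
      // use them to optimize.
      if (typeof x === "number") {
        return "number"; // dead branch in any well-typed program
      }
      return "string: " + x;
    }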
Not really in the case of TypeScript, because (with very small exceptions) when you “compile” TypeScript you are literally just removing the TypeScript, leaving plain JavaScript. It’s just type annotations; it doesn’t describe any runtime behavior at all.
That depends on both the target and the typescript features you use. In many cases, even when downleveling isn't involved, transpiled code can contain more than just stripped type info (particularly common in classes or things with helper functions). There's also nothing stopping a typescript compiler from optimizing transpiled (or directly compiled) code like any other compiler would, though the default typescript tools don't really go after any of that (or even produce a minified version using the additional type hints).
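Parameter properties are a concrete case (an illustrative sketch): the emitted constructor contains assignments the source never spells out, so the emit is more than type stripping:

    class Point {
      // TS-only sugar: the emitted JS gains `this.x = x; this.y = y;`.
      constructor(public x: number, public y: number) {}
    }
    // With "target": "ES5", tsc additionally rewrites the class into a
    // prototype-based function and injects helpers such as __extends
    // and __awaiter for features the target runtime lacks.
    const p = new Point(1, 2);
    console.log(p.x + p.y); // 3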
But the end result is still a JS runtime.
Agreed, at least usually right now (it doesn't have to be forever, which would probably be the most realistic way for TypeScript to make meaningful runtime gains). That does not preclude the possibility of producing more optimal JavaScript code for the runtime to consume. I give a couple examples of that in the other comments.
Sure you can run Typescript. It's a programming language, someone could always write an interpreter for it.
You could, but currently I'm not aware of any widely used options. Both Deno and Node turn it into Javascript first and then run that.
Tell that to the deno project.
Deno compiles TS to JS before execution.
That could be a little confusing but (generally today) TypeScript does not "run", JavaScript does.
> TypeScript does not "run"
Except in the case of Doom, which can run on anything.
Also it’s 4 times faster but runs multithreaded, which was tricky to do in JavaScript (but easier now).
look, not to argue with a stranger on hacker news, lol, but genuine calm question here: is this really a helpful nit? I know what you're getting at but the blogpost itself doesn't imply that JavaScript is 10x faster. I could complain, about your suggested change, that it's really `build and typecheck` time. It's a title. Sometimes they don't have _all_ the context. That's ok.
It is for me. If someone says TypeScript is faster than X, they rarely mean the build time. I understand other people's points about TypeScript not being a runtime at all and only being a compiler, but when casually saying "TypeScript is faster than say ruby", people do not mean the compiler.
But no one actually says "TypeScript is faster than say ruby". They probably say "node is faster than say ruby" or maybe "bun is faster than say ruby". Perhaps they say "JavaScript is faster than say ruby", although even that is underspecified.
well, thanks for explaining. we might just simply disagree here. when I hear "TypeScript" I think of TypeScript, and when I hear "JavaScript" I think of JavaScript. I know what you mean re: casually speaking, but this is a blogpost from the TypeScript team. That context is there, too. I think if the same title were from an AWS release note, I'd totally see what you mean.
Typescript is JavaScript at runtime. It’s not a separate language, just like Python with type annotations (TypePython?) is just Python at runtime. Both are just type annotations that get stripped away before anything tries to run the code. That’s the genius of the idea and why it’s so easily adopted.
It is quite literally a separate language. Python's type hints are a part of the Python specification and all valid Python type hints will run in any compliant Python runtime. Typescript is not, in any way, valid JavaScript. The moment you add any type syntax, you can no longer run the code in Node or Browsers without enabling a special preprocess step.
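Enums make the point concrete: the snippet below is valid TypeScript with runtime semantics of its own, and a syntax error for any plain JavaScript parser:

    enum Color {
      Red,
      Green,
    }
    console.log(Color.Red); // 0
    console.log(Color[0]);  // "Red": a reverse mapping generated by tsc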
Do you think JSX is a separate language?
Yes, JSX is a superset of JS and will not work in any tooling that is not explicitly JSX compatible. JS grammars will not parse it, it's not standard.
That’d be the autism kicking in, you’re gonna have to be 10% less miserable if you want anyone to put up with you.
Then read the article? I don't get it - Typescript, to anyone familiar, is not a language runtime. It does not optimize. It is a transpiler. If you don't even know this much about Typescript, you aren't the audience and lack prerequisite knowledge. Go read anything on the topic.
If someone posted an article talking about the "handedness" of DNA or something, I wouldn't complain "oh, you confused me, I thought you were saying DNA has hands!"
misleading titles are a no-no on HN.
I agree with pseudopersonal that the title should be changed. Technically it's not misleading, but not everyone uses or is familiar with typescript.
It could have been a new TSC that compiles to WASM.
Unfortunately many people only look at headlines, so titles do matter. People take them at face value.
Yes, and TypeScript is not JavaScript. Strictly speaking, every element that is _TypeScript_ is well known to be separate from JavaScript.
This seems pedantic. As a TypeScript user who is aware of the conversations about build performance, the title is not ambiguous at all. I know exactly they are talking about build time.
It was ambiguous to me. When someone says making a language X-times faster, it's natural to think about runtime performance, not compile times. I know TS runs on JS runtimes, but I assumed, based on the title, they created/modified a JS runtime to natively run TS fast.
The explanations are of course correct, but I think you're right and there's not much downside to being clearer in the title. Maybe they decided against saying "compiler" because the performance boost also covers the language server.
Does Deno benefit from that?
So I'm +inf as fast using JS
Since you don't execute TypeScript, and TS never has anything to do with the resulting app at runtime, I don't think it was misleading at all.
People seem very hurt that the creator of C# didn't pick C# for this very public project from a multi-trillion-dollar corp. I find it very refreshing, they defined logical requirements for what they wanted to do and chose Golang because it ticked more boxes than C#. This doesn't mean that C# sucks or that every C# project should switch to Golang, but there seems to be a very vocal minority affected by this logical decision.
My favorite benefit of Go over C# is that I don’t have to carry around a dotnet runtime to every service that touches my Typescript code.
Can't the CLR tools just output native binaries now?
Can it? That’s awesome.
Mostly these days I’m only aware of C# when it inconveniences me.
I love their choice of Go because of how simple it is to generate a static executable with no dependencies (ie no dotnet runtime).
With C# you can either bundle dotnet runtime with the executable, or use native AOT, which compiles to a binary without the runtime.
However, in both native AOT and Go you actually have some parts of the runtime bundled in (e.g. garbage collector).
Why not Rust? https://youtu.be/10qowKUW82U?t=769
Why not C#? https://youtu.be/10qowKUW82U?t=1155
But, I was told that programming language choice doesn't matter and that I can write slow/bad code in any language...
/s
You can write slow code in any language, but you cannot write fast code in any language.
I didn't include every variant I've ever read, but there have been no shortage of people saying that the only thing that matters is your algorithms.
Every time I've said that languages like Python, JavaScript, and basically any other language where it's hard to avoid heap allocations, pointer chasing, and copious data copies are all slow, there are plenty of people who come out of the woodwork to inform me that it's all negligible.
> no shortage of people saying that the only thing that matters is your algorithms.
To be a little bit fair to those people, I have been in many situations where people go "my matlab/python code is too slow, I must re-write it in C", and I've been able to get an order of magnitude improvement by re-writing the code in the same language. Hell, I've ported terrible Fortran code to python/numpy and gotten significant performance improvements. Of course, taking that well-written code and re-writing it in well-written C will probably give you a further order of magnitude improvement. Fast code in a slow language can beat slow code in a fast language, but it will obviously never beat fast code in a fast language.
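The same effect expressed in TS (a toy example of mine, not from the thread): one language, an order of magnitude apart on large inputs:

    // Quadratic: a linear scan per element.
    function dedupeSlow(items: string[]): string[] {
      const out: string[] = [];
      for (const it of items) {
        if (!out.includes(it)) out.push(it);
      }
      return out;
    }

    // Roughly linear: hashing via Set.
    function dedupeFast(items: string[]): string[] {
      return [...new Set(items)];
    }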
For sure. I agree with everything you say, and I've experienced the same thing 100 times, myself--including the specific scenario of speeding up someone's MATLAB code by multiple orders of magnitude by vectorizing the crap out of it. People seem to be almost drawn to quadratic-or-worse algorithms, even when I'd expect them to know better.
I'm just a little bitter because of how many times I've been shushed in places like programming language subreddits and here when I've pointed out how inefficient some cool new library/framework/paradigm is. It feels like I'm either being gaslit or everyone else is in denial that things like excessive heap allocations really do still matter in 2025, and that JITs almost never help much with realistic workloads for a large percentage of applications.
There you go.
The bootcamp cargo-culting crew has pumped out lies such as "the language doesn't matter" or "learn coding in 1 week for a SWE job with JS / TS", and it has caused an increase in low-quality software, with plenty of developers left asking how to bolt on "performance" optimizations after the fact.
What we have just seen is the TS team admitting that a limit has been reached, and that *almost always* the solution is either porting to a compiled language or relying on newer processors, per Moore's Law, to get performance for free.
Now the bootcampers are rediscovering why we need "static typing" and why a "compiled language" is more performant than a VM-based language.
Can you imagine the progress we could've made by now if people just tried to use the right tool for the job instead of trying to make the wrong tool good enough?
All the time spent trying to optimize JITs for JavaScript engines, or alternative Python implementations (e.g., PyPy), and fruitless efforts like trying to get JVMs to start fast enough for use in cloud "lambda function" applications. Ugh...
> and fruitless efforts like trying to get JVMs to start fast enough for use in cloud "lambda function" applications
This is how we got Graal, why would you call it "fruitless effort"?
Many people say this, but it is obviously bullshit. Then again, most things people say all the time are bullshit, so I would not bother with it that much; it's not like people are saying "programming languages don't matter, and here my claim is backed by a hundred statistics and heavily reviewed data and strong literature". It is more like "programming languages don't matter, well, at least I feel like it, the same way flowers smell like blue or something".
I don't think this is accurate.
Javascript is not slow because of GC or JIT (the JVM is about twice as fast in benchmarks; Go has a GC) but because JS as a language is not designed for performance. Despite all the work that V8 does it cannot perform enough analysis to recover desirable performance. The simplest example to explain is the lack of machine numbers (e.g. ints). JS doesn't have any representation for this so V8 does a lot of work to try to figure out when a number can be represented as an int, but it won't catch all cases.
As for "working solution over language politics" you are entirely pulling that out of thin air. It's not supported by the article in any way. There is discussion at https://github.com/microsoft/typescript-go/discussions/411 that mentions different points.
I think JS can really zoom if you let it. Hamsters.js, GPU.js, taichi.js, ndarray, arquero, S.js, are all solid foundations for doing things really efficiently. Sure, not 'native' performance or on the compile side, but having their computational models in mind can really let you work around the language's limitations.
JS can be pretty fast if you let it, but the problem is the fastest path is extremely unergonomic. If you always take the fastest possible path you end up more or less writing asm.js by hand, or a worse version of C that doesn't even have proper structs.
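A sketch of that unergonomic fast path, with hand-rolled "structs" as parallel typed arrays (sizes and names are illustrative):

    const N = 1_000_000;
    // Struct-of-arrays: no per-point objects, no GC pressure,
    // monomorphic float math throughout.
    const xs = new Float64Array(N);
    const ys = new Float64Array(N);

    function translate(dx: number, dy: number): void {
      for (let i = 0; i < N; i++) {
        xs[i] += dx;
        ys[i] += dy;
      }
    }

Fast, but a long way from idiomatic TS.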
I find these userland libraries particularly effective, because you'll never leave JS land, conveniently abstracting over Workers, WebGL/WebGPU and WASM.
JS, interestingly, has a notion of integers, but only in the form of integer arrays, like Int16Array.
I wonder if Typescript could introduce integer type(s) that a direct TS -> native code compiler (JIT or AOT) could use. Since TS becomes valid JS if all type annotations are removed, such numbers would just become normal JS numbers from the POV of a JS runtime which does not understand TS.
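One hedged sketch of how that could look today with a branded type (the `int32` name and brand are hypothetical, not a real TS feature):

    // To a TS-aware native compiler the brand could signal a machine
    // integer; a JS runtime just sees a plain number once types erase.
    type int32 = number & { readonly __int32: unique symbol };

    function asInt32(n: number): int32 {
      return (n | 0) as int32; // `| 0` is the classic asm.js int coercion
    }

    function sum(a: int32, b: int32): int32 {
      return asInt32(a + b);
    }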
AssemblyScript (for WASM) and Huawei's ArkTS (for mobile apps) already exist in this landscape. However, they are too specific in their use cases and have never gained public attention.
You replied to an LLM-generated comment; if you look at the posting history you can confirm it.
> is this the beginning of a larger trend where JS/TS tooling migrates to native implementations
No, it is not. It is a continuation of an existing trend.
You may be interested in esbuild (https://github.com/evanw/esbuild), turborepo (https://github.com/vercel/turborepo), and biome-js (https://github.com/biomejs/biome), which are all native reimplementations of existing JS/TS projects. esbuild is written in Go, the others in Rust.
> reveals something deeper: Microsoft prioritized shipping a working solution over language politics
It's not that "deep". I don't see the politics either way; there are clearly successful projects using both Go and Rust. The only people who see "politics" are those who see people disagreeing, are unable to understand the substance of the disagreement, and decide "ah, it's just politics".
This is not accusatory, but do you write your comments with AI? I checked your profile and someone else had the same question a few days ago. It's the persistent structure of "it isn't X – it's Y" with the em dash (– not -) that makes me wonder this. Nothing to add to your comment otherwise, sorry.
Sorry for being pedantic, but they are using an en dash (–), not an em dash (—), which is a little strange because the latter is usually the one meant for setting off parenthetical information, like commas and parentheses do. In addition, in most styles, you're not supposed to add spaces around it.
So, I don't think the comment is AI-generated for this reason.
"The en-dash is also increasingly used to replace the long dash ('—', also called an em dash or em rule). When using it to replace a long dash, spaces are needed either side of it – like so." https://en.wikipedia.org/wiki/En_(typography)
You're right, oops. I agree with your reasoning (comment still gives off slop vibes but that's unprovable). But the parent has been flagged, so I'm not sure if that means admins/dang has agreed with me or if it was flagged for another reason.
I think anyone can flag a comment, and if enough people do, it gets marked as flagged.
em-dash is shift-option-hyphen on macOS, so it's not a good heuristic—I use it myself.
They're using en-dash which is even easier: option-hyphen.
This is the wrong way to do AI detection. For one, an LLM would have used the right dash. Better to look for someone wasting our time with belabored, overwrought text that doesn't actually engage with anything.
This is definitely AI; it's repetitive and reads like marketing copy / a sensationalized report.
They're not "definitely" an AI. Sounds like a normal Go enthusiast to me.
A Go enthusiast who’s never heard of esbuild? Not impossible, but unlikely.
You know, some humans use the correct dash too...
The em dash thing is not very conclusive. I have been writing with the em dash for many years, because it looks better and is very accessible on Mac OS (long press on dash key), while carrying a different tone than the simple dash. That, and I read some Tristram Shandy.
Two hyphens (--) make an em dash (—) on Apple devices and in many word processors.
In the pre-Unicode days, people would use two hyphens (--) to simulate em dashes.
That would explain a lot.
> The Go choice over Rust/C# reveals something deeper: Microsoft prioritized shipping a working solution over language politics. Go's simplicity (compared to Rust) and deployment model (compared to C#) won the day.
I'm not sure that this is particularly accurate for the Rust case. The goal of this project was to perform a 1:1 port from TypeScript to a faster language. The existing codebase assumes a garbage collector so Rust is not really a realistic option here. I would bet they picked GCed languages only.
Also explained in the FAQ: https://github.com/microsoft/typescript-go/discussions/411
I can't imagine the devs at Microsoft have any issues with C#'s "deployment model."
I can imagine C# being annoying to integrate into some CIs, for instance. Go fits a sweet spot, with its fast compiler and usually limited number of external dependencies.
I assume they picked Go because the binaries can be fully standalone.
I don't see why Go deployment model is superior to C#. You can easily build native binaries in C# as well nowadays.
I get the impression that, because Go has a lot of similar semantics to Typescript, it was easier to port to Go than other languages.
From https://github.com/microsoft/typescript-go/discussions/411
> Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
> We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
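In TS terms, the workload the FAQ describes is shaped roughly like this (a hypothetical sketch, not the actual compiler API):

    interface AstNode {
      kind: string;
      parent?: AstNode;
      children: AstNode[];
    }

    // Downward walk: visit a node and all of its descendants.
    function walkDown(node: AstNode, visit: (n: AstNode) => void): void {
      visit(node);
      for (const child of node.children) walkDown(child, visit);
    }

    // Upward walk: find the nearest enclosing node of a given kind,
    // e.g. the containing function scope during type checking.
    function findEnclosing(node: AstNode, kind: string): AstNode | undefined {
      for (let n = node.parent; n; n = n.parent) {
        if (n.kind === kind) return n;
      }
      return undefined;
    }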
Personally, I'm a big believer in choosing the right language for the job. C# is a great language, and often is "good enough" for many jobs. (I've done it for 20 years.) That doesn't mean it's always the best choice for the job. Likewise, sometimes picking a "familiar language" for a target audience is better than picking a personal favorite.
Come on… 1 statically linked executable and it can cross build incredibly easily. There's no comparison even.
Smaller binary sizes for easier + cheaper distribution might be a factor.
You can make very small binaries in C# if you want to.
But, the team posted their rationale for Go here: https://github.com/microsoft/typescript-go/discussions/411
It *almost* sounds like you're telling the authors, one of whom posted this, what their motivations are.
>The Go choice over Rust/C# reveals something deeper: Microsoft prioritized shipping a working solution over language politics. Go's simplicity (compared to Rust) and deployment model (compared to C#) won the day. Even Anders Hejlsberg – father of C# – chose Go for pragmatic reasons!
I don't follow. If they had picked Rust over Go why couldn't you also argue that they are prioritising shipping a working solution over language politics. It seems like a meaningless statement.
Go with parametric types is already a reasonably expressive language. Much more expressive than C, in which a number of compilers have been written, at least initially; not everyone had the luxury of using OCaml or Haskell.
There is already a growing number of native-code tools of the JS/TS ecosystem, like esbuild or swc.
Maybe we should expect attempts at native AOT compilation for TS itself, to run on the server side, much like C# has an AOT native-code compiler.
> it signals we've hit fundamental limits in JS/TS for systems programming
Really is this a surprise to anyone? I don't think anyone thinks JS is suitable for 'systems programming'.
Javascript is the language we have for the browser - there's no value in debating its merits when it's the only option. Javascript on the server has only ever accrued benefits from being the same language as the browser.
> When a language team abandons self-hosting (TS in TS) for raw performance (Go), it signals we've hit fundamental limits in JS/TS for systems programming.
I hope you really mean for "userspace tools / programs" which is what these dev-tools are, and not in the area of device drivers, since that is where "systems programming" is more relevant.
I don't know why one would choose JS or TS for "systems programming", but I'm assuming you're talking about user-space programs.
But really, those who know the difference between a compiled language and a VM-based language know the fundamental performance limitations of developer tools written in VM-based languages like JS or TS, and would avoid them; those languages were not designed for this use case.
Back in my day, writing compilers was part of systems programming.
Yeah, the term has changed meaning several times. Early on, "systems programmer" meant basically what we call a "developer" now (by opposition to a programmer or a researcher).
At that time, what would have been the distinction between "programmer" and "developer"?
> Go's simplicity
I think they went for Go mostly because of memory management, async and syntactic similarity to interpreted languages which makes total sense for a port.
I wish there were a language like Rust without the borrow checking and lifetimes that was also popular and lived in the same space as Go. I think Go is actually the best language in this category, but only because there is nothing else. All in all, Go is not an elegant language.
OCaml is similar, now that it has multicore. Scala is also similar, though the native-code side (https://scala-native.org/en/stable/) is not nearly as well developed as the JVM side.
The problem with OCaml is it won't get popular because people are afraid of FP. But I would be totally down to use it.
Rust loses a lot of its nice properties without borrow checking and lifetimes, though. For example, resources no longer get cleaned up automatically, and the compiler no longer protects you against data races. Which in turn makes the entire language memory unsafe.
I believe OP meant giving it a GC like Go's, while keeping the other Rust features: enums/match/generics/traits, etc.
This should prevent most of the memory safety issues, though data races could still be tricky (e.g. Go is memory unsafe due to data races)
OTOH it would still have Rust's sane type system and all the nice features it makes possible.
OCaml and Haskell already have that nice type system (and even more nice). If OCaml's syntax bothers you, there is Reason [1] which is a different frontend to the same compiler suite.
Also in this space is Gleam [2] which targets Erlang / OTP, if high concurrency and fault tolerance is your cup of tea.
[1]: https://reasonml.github.io/
[2]: https://gleam.run/
That language is Rust, though.
It’s not popular compared to Go/Rust, but many find Nim scratches that itch:
https://nim-lang.org/
Zig is another popular neo-language in the same rough space.
tl;dr: TypeScript compiler (!) was implemented in TypeScript, new one is in Go
half of the perf gain is from moving to native code, other half is from concurrency
tl;dr
10x faster compilation, not runtime performance
still javascript though
So in order to get "faster TypeScript" you have to port the existing "transpiler" to a compiled language that delivers said performance.
This is an admission that these JavaScript-based languages (including TypeScript) are simply unsuitable for performance-critical situations, especially as the codebase scales.
As long as it is a compiled language with reasonable performance and proper memory management, Go is the unsurprising choice, but the wise one to solve this problem.
But this choice definitively shows (and as admitted by the TS team) how immature both JavaScript and TypeScript are in performance and scalability scenarios, and they should be absolutely avoided for building systems that need those. Especially in the backend.
Just keep it in the frontend.
They're not getting "faster typescript", they're getting "a faster typescript transpiler / type checker"; subtle but important difference. The runtime of TS is Javascript engines, and most of "typescript transpilation" is pretty straightforward removal of type information.
Anyway, JS is not immature in performance per se, but in this particular use case, a native language is faster. But they had to solve the problem first before they could decide what language was best for it.
> ctrl-f "rust" > 93 matches
sigh
I don't get it.
Why is typescript not already a standard natively supported by browsers?!
It kinda is already; strip the type information and you've got valid JS. Node.js supports running TypeScript nowadays, with the exception of some unerasable syntax that is being discouraged. I'm sure it's only a matter of time before that bubbles up to V8 and other browser JS engines.
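For instance (assuming Node 22.6+, where the flag exists), erasable-only TypeScript runs directly:

    // greet.ts: run with `node --experimental-strip-types greet.ts`.
    // Unerasable syntax (enum, namespace, parameter properties) needs
    // --experimental-transform-types instead.
    function greet(name: string): string {
      return `hello, ${name}`;
    }
    console.log(greet("world"));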
Curious how this is going to affect Cursor - I'm assuming it'll just be a drop-in replacement and we can expect Cursor to get the same speed-up as VSCode.
And the lesson is; don't build anything that needs to be performant in TypeScript because it's so slow?
Correct.
Very pumped to see how this improves the experience in VSCode.
I've been revisiting my editing setup over the last 6 months and to my surprise I've time traveled back to 2012 and am once again really enjoying Sublime Text. It's still by far the most performant editor out there, on account of the custom UI toolkit and all the incredibly fast indexing/search/editing engines (everything's native).
Not sure how this announcement impacts VSCode's UI being powered by Electron, but having the indexing/search/editing engines implemented in Go should drastically improve my experience. The editor will never be as fast as Sublime but if they can make it fast enough to where I don't notice the indexing/search/editing lag in large projects/files, I'd probably switch back.
> Not sure how this announcement impacts VSCode's UI being powered by Electron
It has no bearing on this at all.
Sublime Text has been my main since at least then as well. I can _see_ the lag in VSCode.
I can see why they didn't use Rust, I've written little languages in it myself, so I know what is involved, even though I like the language a lot. But I'm quite surprised they didn't use C#. I would have thought ahead-of-time-optimized C# would give nearly the same compilation speed as Go. They do seem to be leaning into concurrency a lot, so maybe it's more about Go's implementation of that (CSP-like), but doesn't .NET have a near-equivalent? Have not used it in a while.
Also, I get the sense from the video that it still outputs only JS. It would be nice if we could build typescript executables that didn't require that, even if it was just WASM, though that is more of a different backend than a different compiler.
Edit: C# was addressed: https://github.com/microsoft/typescript-go/discussions/411#d...