The author’s point about “not caring about pip vs poetry vs uv” misses that uv directly supports this use case, including PyPI dependencies; all you need is uv and your preferred Python version installed: https://docs.astral.sh/uv/guides/scripts/#using-a-shebang-to...
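For the unfamiliar, a minimal sketch of what that looks like (requests is just a stand-in dependency here; any PyPI package works):

    #!/usr/bin/env -S uv run --script
    # /// script
    # requires-python = ">=3.12"
    # dependencies = ["requests"]
    # ///
    import requests

    # uv fetches the interpreter and dependencies on first run, then caches them
    print(requests.get("https://example.com").status_code)

chmod +x it and it runs like any other script.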
I solved this in 2019 with PyFlow, but no one used it, so I lost interest. It's an OSS tool written in Rust that automatically and transparently manages Python versions and venvs. You just set up a `pyproject.toml`, run `pyflow main.py` etc., and it just works. It installs and locks dependencies like Cargo, and installs and runs the correct Python version for the project.
At the time, Poetry and Pipenv were the popular tools, but I found they were not sufficient; they did a good job abstracting dependencies, but not venvs and Python versions.
My best guess: I'm bad at marketing, and gave up too soon. The feedback I received was generally "Why would I use this when pip, Pipenv and Poetry work fine?". To me they didn't; they were a hassle due to not handling venvs and Python versions, but I didn't find many people who'd had the same problem.
I’ve started migrating all of my ~15 years of one-off Python scripts to have this front matter. Right now, I just update when/if I use them. I keep thinking that if I were handier with grep/sed/regex etc., I’d try to programmatically update .pys system-wide. But many aren’t git tracked/version controlled, just lying in whatever dir they service(d). I’ve several times started a “python script dashboard” or “hacky tools coordinator” but stop when I remember most of these are unrelated (to each other) and un/rarely used. I keep watching the chatter and thinking this is probably an easy task for Codex or some other agent, but these .pys are “mine” (and I knew^ how they worked when I wrote^ them), and also, they’re scattered and there’s no way I’m turning an agent loose on my file system.
^mostly, some defs might have StackOverflow copy/pasta
The last time I commented extolling the virtues of uv on here, I got a similar reply, pointing out that PEP 723 specs this behavior, and uv isn’t the only way. So I’ll try again in this thread: I’m bullish on uv, and waiting for Cunningham.
This is an awesome feature for quick development.
I'm sure the documentation of this featureset highlights what I'm about to say, but if you're attracted to the simplicity of writing Python projects that are initialized using this method, do not use this code in staging/prod.
If you don't see why this is not production friendly, the simple reason is that once you need to create deployable artifacts packaging a project (or a dependency of a project) that uses this method, reproducible builds become impossible.
This will also lead to builds that pass your CI but fail to run in their destination environment, and vice versa, because they download their dependencies on the fly.
There may be workarounds and I know nothing of this feature so investigate yourself if you must.
This isn't really "alternatively"; it's pointing out that in addition to the shebang you can add a PEP 723 dependency specification that `uv run` (like pipx, and some other tools) can take into account.
Yeah, but you need Nix. If we are reaching out for tools that might not be around, then you can also depend on `curl | sudo bash` to install Nix when not present.
That shebang will work on GNU/Linux-based systems but might not work elsewhere. I know that’s the most popular target, but that means not working on macOS, the BSDs, or even busybox.
> Then you don't even need python installed. uv will install the version of python you specified and run the command
What you meant was, "you don't need Python pre-installed". This does not solve the problem of not wanting to have (or being prevented from having) Python installed.
I thought that too, but I think the tricky bit is if you're a non-python user, this isn't yet obvious.
If you've never used Clojure and start a Clojure project, you will almost definitely find advice telling you to use Leiningen.
For Python, if you search online you might find someone saying to use uv, but also potentially venv, poetry or hatch. I definitely think uv is taking over, but it's not yet ubiquitous.
Ironically, I actually had a similar thing installing Go the other day. I'd never used Go before, and installed it using apt only to find that version was too old and I'd done it wrong.
Although in that case, it was a much quicker resolution than I think anyone fighting with virtual environments would have.
That's my experience. I'm not a Python developer, and installing Python programs has been a mess for decades, so I'd rather stay away from the language than try another new tool.
Over the years, I've used setup.py, pip, pipenv (which kept crashing though it was an official recommendation), manual venv+pip (or virtualenv? I vaguely remember there were 2 similar tools and none was part of a minimal Python install). Does uv work in all of these cases? The uv doc pointed out by the GP is vague about legacy projects, though I've just skimmed through the long page.
IIRC, Python tools didn't share their data across projects, so they could build the same heavy dependencies multiple times. I've also seen projects with incomplete dependencies (installed through Conda, IIRC) which were a major pain to get working. For many years, the only simple and sane way to run some Python code was in a Docker image, which has its own drawbacks.
pip and venv. The Python ecosystem has taken a huge step backwards with the preachy attitude that you have to do everything in a venv. Not when I want to have installable utility scripts usable from all my shells at any time or location.
I get that installing to the site-packages is a security vulnerability. Installing to my home directory is not, so why can't that be the happy path by default? Debian used to make this easy with the dist-packages split leaving site-packages as a safe sandbox but they caved.
I'm interpreting this as "uv was built off of years of PEPs", which is true; that being said the UX of `uv` is their own, and to me has significantly reduced the amount of time I spend thinking about requirements, modules, etc.
> IIRC, Python tools didn't share their data across projects, so they could build the same heavy dependencies multiple times.
One of the neatest features of uv is that it uses clever symlinking tricks so if you have a dozen different Python environments all with the same dependency there's only one copy of that dependency on disk.
Hard links, in fact. It's not hard to do, just (the Rust equivalent of) `os.link` in place of `shutil.copy`, pretty much. The actually clever part is that the package cache contains files that can be used this way, instead of just having wheels and unpacking them from scratch each time.
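A minimal sketch of the idea in Python terms (hypothetical paths and function name; the copy fallback handles the case where cache and environment sit on different filesystems, which hard links can't cross):

    import errno
    import os
    import shutil

    def materialize(cached: str, dest: str) -> None:
        # Hard-link the cached file into the environment: every environment
        # that needs this file then shares a single on-disk copy.
        try:
            os.link(cached, dest)
        except OSError as e:
            # EXDEV: cache and environment are on different filesystems,
            # so a hard link is impossible; fall back to copying.
            if e.errno != errno.EXDEV:
                raise
            shutil.copy2(cached, dest)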
For pip to do this, first it would have to organize its cache in a sensible manner, such that it could work as an actual download cache. Currently it is an HTTP cache (except for locally-built wheels), where it uses a vendored third-party library to simulate the connection to files.pythonhosted.org (in the common PyPI case). But it still needs to connect to pypi.org to figure out the URI that the third-party library will simulate accessing.
Coming from a mostly Java guy (since around 2001), I've been away from Python for a while and my two most recent work projects have been in Python and both switched to uv around the time I joined. Such a huge difference in time and pain - I'm with you here.
Python's always been a pretty nice language to work in, and uv makes it one of the most pleasant to deal with.
There's definitely a philosophical shift that you can observe happening over the last 12-15 years or so, where at the start you have the interpreter as the centre of the world and at the end there's an ecosystem management tool that you use to give yourself an interpreter (and virtual environments, and so on) per project.
I think this properly kicked off with RVM, which needed to come into existence because you had this situation where the Ruby interpreter was going through incompatible changes, the versions on popular distributions were lagging, and Rails, the main reason people were turning to Ruby, was relatively militant about which interpreter versions it would support. Also, building the interpreter such that it would successfully run Rails wasn't trivial. Not that hard, but enough that a convenience wrapper mattered. So you had a whole generation of web devs coming up in an environment where the core language wasn't the first touchpoint, and there wasn't an assumption that you could (or should) rely on what you could apt-get install on the base OS.
This is broadly an extremely good thing.
But the critical thing that RVM did was that it broke the circular dependency at the core of the problem: it didn't itself depend on having a working ruby interpreter. Prior to that you could observe a sort of sniffiness about tools for a language which weren't implemented in that language, but RVM solved enough of the pain that it barged straight past that.
Then you had similar tools popping up in other languages - nvm and leiningen are the first that spring to mind, but I'd also throw (for instance) asdf into the mix here - where the executable that you call to set up your environment has a '#!/bin/bash' shebang line.
Go has sidestepped most of this because of three things: 1) rigorous backwards compatibility; 2) the simplest possible installation onramp; 3) being timed with the above timeline so that having a pre-existing `go` binary provided by your OS is unlikely unless you install it yourself. And none of those are true of Python. The backwards compatibility breaks in this period are legendary, you almost always do have a pre-existing Python to confuse things, and installing a new python without breaking that pre-existing Python, which your OS itself depends on, is a risk. Add to that the sniffiness I mentioned (which you can still see today on `uv` threads) and you've got a situation where Python is catching up to what other languages managed a decade ago.
It is sort of funny, if we squint just the wrong way, “ecosystem management tool first, then think about interpreters” starts to look a lot like… a package manager, haha.
deps.edn is becoming the default choice, yes. I interpreted the parent comment as saying "you will see advice to use leiningen (even though newer solutions exist, simply because it _was_ the default choice when the articles were written)"
Only to those already steeped in Python. To an outsider they're all equally arbitrary non-descriptive words and there's not even obvious proper noun capitalization to tell apart a component from a tool brand.
It's always rather irritating to me that people make these complaints without trying to understand any of the under-the-hood stuff, because the ultimate conclusion is that it's somehow a bad thing that, on a FOSS project, multiple people tried to solve a problem concurrently.
That’s especially ironic given that inside Python part of the philosophy is “There should be one-- and preferably only one --obvious way to do it.” So why does Python’s external environment seem more like something that escaped from a Perl zoo?
* People cling to memories of long-obsolete issues. When people point to XKCD 1987 they overlook that Python 2.x has been EOL for almost six years (and 3.6 for over four, but whatever)[1]; only Mac users have to worry about "homebrew" (which I understand was directly interfering with stuff back in the day) or "framework builds" of Python; easy_install is similarly a long-deprecated dinosaur that you also would never need once you have pip set up; and fewer and fewer people actually need Anaconda for anything[2][3].
* There is never just one way to do it, depending on your understanding of "do". Everyone will always imagine that the underlying functionality can be wrapped in a more user-friendly way, and they will have multiple incompatible ideas about what is the most user-friendly.
But there is one obvious "way to do it", which is to set up the virtual environment and then launch the virtual environment's Python executable. Literally everything else is window dressing on top of that. The only thing that "activating" the environment does is configure environment variables so that `python` means the virtual environment's Python executable. All your various alternative tools are just presenting different ways to ensure that you run the correct Python (under the assumption that you don't want to remember a path to it, I guess) and to bundle up the virtual environment creation with some other development task.
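Spelled out as commands (requests here purely as a stand-in dependency):

    python -m venv .venv
    .venv/bin/python -m pip install requests    # no activation needed
    .venv/bin/python myscript.py                # on Windows: .venv\Scripts\python.exe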
The Python community did explicitly provide for multiple people to provide such wrappers. This was not by providing the "15th competing standard". It was by providing the standard (really a set of standards designed to work together: the virtual environment support in the standard library, the PEPs describing `pyproject.toml`, and so on), which replaced a Wild West (where Setuptools was the sheriff and pip its deputy).
[0]: By the way, this is by someone who doesn't like virtual environments and was one of the biggest backers of PEP 582.
[1]: Of course, this is not Randall Munroe's fault. The comic dates to 2018, right in the middle of the period where the community was trying to sort things out and figure out how to not require the often problematic `setup.py` configuration for every project including pure-Python ones.
[2]: The SciPy stack has been installable from wheels for almost everyone for quite some time and they were even able to get 3.12 wheels out promptly despite being hamstrung by the standard library `distutils` removal.
[3]: Those who do need it, meanwhile, can generally live within that environment entirely.
The way I teach, I would start there; then you always have it as a fallback, and understand the system better.
I generally sort users into aspirants who really should learn those things (and will benefit from it), vs. complete end users who just want the code to run (for whom the developer should be expected to provide, if they expect to gain such a following).
I've moved over mostly to uv too, using `uv pip` when needed but mostly sticking with `uv add`. But as soon as you start using `uv pip` you end up with all the drawbacks of pip-style installs, namely that whatever you install later can affect earlier dependency resolutions too. Running `uv pip install dep-a` and then `... dep-b` isn't the same as `... dep-b` first and then `... dep-a`, or the same as `uv pip install dep-a dep-b`, which, coming from an ecosystem that does proper dependency resolution and has workspaces, can be really confusing.
This is more of a pip issue than a uv one, though, and `uv pip` is still preferable in my mind. But it seems Python package management will forever be a mess; not even the band-aid of uv can fix things like these.
I've been away from Python for a while now; I was under the impression uv was somehow solving this dependency hell. What's the benefit of using uv/pip together? Speed?
As far as I can tell, `pip` by itself still doesn't even do something as basic as resolving the dependency tree first and then downloading all the packages in parallel, as a basic example. The `uv pip` shim does.
And regardless of whether you use only uv, pip-via-uv, or straight-up pip, dependencies you install later step over dependencies you installed earlier, and no tool so far seems to try to solve this, which leads me to conclude it's a Python problem, not a package manager problem.
There are really so many things about this point that I don't get.
First off, in my mind the kinds of things that are "scripts" don't have dependencies outside the standard library, or if they do are highly specific to my own needs on my own system. (It's also notable that one of the advantages the author cites for Go in this niche is a standard library that avoids the need for dependencies in quick scripts! Is this not one of Python's major selling points since day 1?)
Second, even if you have dependencies you don't have to learn differences between these tools. You can pick one and use it.
Third, virtual environments are literally just a place on disk for those dependencies to be installed, that contains a config file and some stubs that are automatically set up by a one-liner provided by the standard library. You don't need to go into them and inspect anything if you don't want to. You don't need to use the activation script; you can just specify the venv's executable instead if you prefer. None of it is conceptually difficult.
Fourth, sharing an environment for these quick scripts actually just works fine an awful lot of the time. I got away with it for years before proper organization became second nature, and I would usually still be fine with it (except that having an isolated environment for the current project is the easiest way to be sure that I've correctly listed its dependencies). In my experience it's just not a thing for your quick throwaway scripts to be dependent on incompatible Numpy versions or whatever.
... And really, to avoid ever having to think about the dependencies you provide dynamically, you're going to switch to a compiled language? If it were such a good idea, nobody would have thought of making languages like Python in the first place.
And uh...
> As long as the receiving end has the latest version of go, the script will run on any OS for tens of years in the future. Anyone who's ever tried to get python working on different systems knows what a steep annoying curve it is.
The pseudo-shebang trick here isn't going to work on Windows any more than a conventional one is. And no, when I switched from Windows to Linux, getting my Python stuff to work was not a "steep annoying curve" at all. It came more or less automatically with acclimating to Linux in general.
(I guess referring to ".pyproject" instead of the actually-meaningful `pyproject.toml` is just part of the trolling.)
> Third, virtual environments are literally just a place on disk for those dependencies
I had a recent conversation with a colleague. I said how nice it is using uv now. They said they were glad because they hated messing with virtualenvs so much that preferred TypeScript now. I asked them what node_modules is, they paused for a moment, and replied “point taken”.
Uv still uses venvs because that’s the official way Python stores all of a project’s packages in one place. Node/npm, Go/go, and Rust/cargo all do similar things, but I only really hear people grousing about Python’s version, which, as you say, you can totally ignore and never ever look at.
From my experience, it seems like a lot of the grousing is from people who don't like the "activation script" workflow and mistakenly think it's mandatory. Though I've also seen aesthetic objections to the environment actually having internal structure rather than just being another `site-packages` folder (okay; and what are the rules for telling Python to use it?)
I've heard those objections, too. I do get that specific complaint: it's another step you have to do. That said, things like direnv and mise make that disappear. I personally like the activation workflow and how explicit it is, as you're activating that specific venv, or maybe one in a different location if you want to use that instead. I don't like sprinkling "uv run ..." all over the place. But the nice part is that both of those work, and you can pick whichever one you prefer.
It'll be interesting to see how this all plays out with __pypackages__ and friends.
> But the nice part is that both of those work, and you can pick whichever one you prefer.
Yep. And so does the pyenv approach (which I understand involves permanently adding a relative path to $PATH, wherein the system might place a stub executable that invokes the venv associated with the current working directory).
And so do hand-made subshell-based approaches, etc. etc.
In "development mode" I use my activation-script-based wrappers. When just hacking around I generally just give the path to the venv's python explicitly.
...but you have to be able to get uv, and on some platforms (e.g. a Raspberry Pi) it won't build because the version of Rust is too old. So I wrote a script called "pv" in Python which works a bit like uv - just enough to get my program to work. It made me laugh a bit, but it works anywhere, well enough for my use case. All I had to do was embed a primitive AI-generated TOML parser in it.
While this is true, it is often stunning to me how long it took to get to `uv run`.
I have worked with Python on and off for 20+ years and I _always_ dreaded working with any code base that had external packages or a virtual environment.
`uv run` changed that and I migrated every code base at my last job to it. But it was too late for my personal stuff - I already converted or wrote net new code in Go.
I am on the fence about Python long term. I’ve always preferred typed languages and with the advent of LLM-assisted coding, that’s even more important for consistency.
It's a UX issue. The author is correct: nobody cares about all the mumbo-jumbo of virtualenvs or whatever other techno-babble.
The user
just
wants
to run
the damn program.
> `uv run` and PEP 723 solved every single issue the author is describing.
PEP 723 eh? "Resolution: 08-Jan-2024"
Sure, so long as you somehow magically gain the knowledge to use uv, then you will have been able to have a normal, table-stakes experience for a whole two years now. Yay, go Python ecosystem!
Is uv the default, officially recommended way to run Python? No? Remember to wave goodbye to all the users passing the language by.
Go explicitly rejected adding shebang support, which is what mandates this hack; it was considered an "abuse of resources"[0]. Instead, `gorun` (which Pike has also called a mistake) is recommended, and it lets you adapt this method so you don't need to hardcode a path:
/// 2>/dev/null ; gorun "$0" "$@" ; exit $?
The shell tries to execute `///` (i.e. the root directory), fails, and discards the error; then gorun compiles and runs the file, and its exit code is propagated. Go, for its part, lexes that entire first line as a `//` comment.
>Good-old posix magic. If you ask an LLM, it'll say it's due to Shebangs.
Well, ChatGPT gives the same explanation as the article, which is unsurprising considering this mechanic has been explained many times.
>none other fits as well as Go
Nim, Zig, and D all have a `-run`-style argument and can be used in a similar way. Swift, OCaml, and Haskell can directly execute a file, no need to provide an argument.
However... scripting requires (in my experience), a different ergonomic to shippable software. I can't quite put my finger on it, but bash feels very scriptable, go feels very shippable, python is somewhere in the middle, ruby is closer to bash, rust is up near go on the shippable end.
Good scripting is a mixture of OS-level constructs available to me in the syntax I'm in (bash obviously is just using OS commands with syntactic sugar to create conditionals, loops and variables), and the kinds of problems where I don't feel I need a whole lot of tooling: LSPs, test coverage, whatever. It's languages that encourage quick, dirty, throwaway code that allow me to get that one-off job done that the guy in sales needs on a Thursday so we can close the month out.
Go doesn't feel like that. If I'm building something in Go I want to bring tests along for the ride, I want to build a proper build pipeline somewhere, I want a release process.
I don't think I've thought about language ergonomics in this sense quite like this before, I'm curious what others think.
Talking about Python "somewhere in the middle" - I had a demo of a simple webview GTK app I wanted to run on a vanilla Debian setup last night.. so I did the canonical-thing-of-the-month and used uv to instantiate a venv and pull the dependencies. Then attempted to run the code.. mayhem. Errors indicating that the right things were in place but that the code still couldn't run (?) and finally Python core dumped.. OK. This is (in some shape or form) what happens every single time I give Python a fresh go for an idea. Eventually: Golang is more verbose (and I don't particularly like the go.mod system either) but once things compile.. they run. They don't attempt running or require xyz OS-specific hack.
Gtk makes that simple python program way more complex since it'll need more than pure-python dependencies.
It's really a huge pain point in python. Pure python dependencies are amazingly easy to use, but there's a lot of packages that depend on either c extensions that need to be built or have OS dependencies. It's gotten better with wheels and manylinux builds, but you can still shoot your foot off pretty easily.
I'm pretty sure the gtk dependencies weren't built by Astral, which, yes, unfortunately means that it won't always just work, as they streamline their Python builds in... unusual ways. A few months ago I had a similar issue running a Tkinter project with uv, then all was well when I used conda instead.
I've had similar issues with anaconda, once upon a time. I've hit a critical roadblock that ruined my day with every single Python dependency/environment tool except basic venv + requirements.txt, I think. That gets in the way the least but it's also not very helpful, you're stuck with requirements.txt which tends to be error-prone to manage.
Have to disagree, "technically" yes, both are interpreted languages, but the ergonomics and mental overhead of doing certain things are wildly different:
In python, doing math or complex string or collection operations is usually a simple oneliner, but calling shell commands or other OS processes requires fiddling with the subprocess module, writing ad-hoc streaming loops, etc - don't even start with piping several commands together.
Bash is the opposite: As long as your task can be structured as a series of shell commands, it absolutely shines - but as soon as you require custom data manipulation in any form, you'll run into awkward edge cases and arbitrary restrictions - even for things that are absolutely basic in other languages.
The subprocess module is horrendous, but even if it were great, bash is simpler. Just think about trying to create a pipe of processes in Python without the danger of blocking.
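For reference, the blessed pattern for the equivalent of `ps aux | grep python` looks roughly like this, and forgetting the `close()` is exactly the kind of blocking footgun being alluded to:

    import subprocess

    # Equivalent of: ps aux | grep python
    p1 = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["grep", "python"], stdin=p1.stdout,
                          stdout=subprocess.PIPE, text=True)
    p1.stdout.close()  # lets p1 receive SIGPIPE if p2 exits first
    output, _ = p2.communicate()
    print(output)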
Maybe the ergonomics of writing code is less of a problem if you have a quick way of asking an LLM to do the edits? We can optimize for readability instead.
More specifically, for the readability of code written by an LLM.
I thought this was going to be a longer rant about how python needs to... Go away. Which, as a long time python programmer and contributor, and at one time avid proponent of the language, I would entertain the suggestion. I think all of ML being in Python is a colossal mistake that we'll pay for for years.
The main reasons being it is slow, its type system is significantly harder to use than other languages, and it's hard to distribute. The only reason to use it is inertia. Obviously inertia can be sufficient for many reasons, but I would like to see the industry consider python last, and instead consider typescript, go, or rust (depending on use case) as a best practice. Python would be considered deprecated and only used for existing codebases like pytorch. Why would you write a web app in Python? Types are terrible, it's slow. There are way better alternatives.
With that said... there is a reason why ML went with Python. GPU programming requires C-based libraries. NodeJS does not have a good FFI story, and neither does Rust or Go. Yes, there's support, but Python's FFI support is actually better here. Zig is too immature here.
The world deserves a Python-like language with a better type system, a better distribution system, and not nearly as much dynamism footguns / rope for people to hang themselves with.
C#/.NET? (Their too-strong focus on worthless backwards compatibility and slow (very slow) development speed on basic language features notwithstanding.)
Its type system is miles better than Python's, and it has some basic stuff Python doesn't have, like block scope. Functional programming is also intentionally kind of a pain in Python with the limited lambdas.
If TypeScript had the awesome python stdlib and the Numpy/ML ecosystem I would use it over Python in a heartbeat.
Typescript also has significantly better performance. This is largely thanks to the browser wars funnelling an insane amount of engineering effort toward JavaScript engines over the last couple decades. Nodejs runs v8, which is the JavaScript engine used by chrome. And Bun uses JSC, written for safari.
For IO bound tasks, it also helps that JavaScript has a much simpler threading model. And it ships an event based IO system out of the box.
You can define a named closure in Python; I do it from time to time, though it does seem to surprise others sometimes. I think maybe it's not too common.
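For anyone who hasn't seen it, a quick sketch of what's meant here:

    def make_counter():
        count = 0
        def bump():  # a named closure over count; no lambda needed
            nonlocal count
            count += 1
            return count
        return bump

    counter = make_counter()
    counter()  # 1
    counter()  # 2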
Typescript is a really nice language even though it sits on a janky runtime. I’d love a subset of typescript that compiles to Go or something like that.
TypeScript is ubiquitous in web, and there are some amazing new frameworks that reuse TypeScript types on the server and client (tRPC, TanStack). It's faster (than Python), has ergonomic types, and a massive community + npm ecosystem. Bun, which Anthropic just bought and uses for Claude Code, advances the state of the art for runtime performance.
Those are both valid reasons to use both languages. The "only" (whether true or not) is what the argument hinges on. It is roughly the same as saying that the only advantage of X is that it is popular, but Y is also popular and has additional advantages, therefore, Y is better than X. That is a valid argument, whether the premises are true or not.
I do not disagree but if you are going to say that "X" is only used because of "Y", maybe if you are pitching "Z" instead of "X" do not start with the "Y" :)
Typescript is a lot nicer than Python in many ways. Especially via Deno, and especially for scripting (imports work like people want!).
There are some things that aren't as good, e.g. Python's arbitrary precision integers are definitely nicer for scripting. And I'd say Python's list comprehension syntax is often quite nice even if it is weirdly irregular.
But overall Deno is a much better choice for ad-hoc scripting than Python.
I agree, but bigints are effectively missing from JSON because JavaScript's JSON.parse decodes every number as a 64-bit float (the JSON spec itself doesn't mandate a precision, but that's the interoperability baseline). Any other kind of number in JSON is nonstandard in practice.
JavaScript itself supports bigint literals just fine. Just put an ‘n’ after your number literal. Eg 0xffffffffffffffn.
There’s a whole bunch of features I wish we could go in and add to json. Like comments, binary blobs, dates and integers / bigints. It would be so much nicer to work with if it has that stuff.
"> I think all of ML being in Python is a colossal mistake that we'll pay for for years.
Market pressure. Early ML frameworks were in Lisp, then eventually Lua with Torch, but demand dictated the choice of Python because "it's simple" even if the result is cobbled together.
Lisp is arguably still the most suitable language for neural networks for a lot of reasons beyond the scope of this post, but the tooling is missing. I’m developing such a framework right now, though I have no illusions that many will adopt it. Python may not be elegant or efficient, but it's simple, and that's what people want.
Gee, I wonder why the tooling for ML in Lisp is missing even though the early ML frameworks were in Lisp. Perhaps there is something about the language that stifles truly wide collaboration?
I doubt it considering there are massive Clojure codebases with large teams collaborating on them every day. The lack of Lisp tooling and the prevalence of Python are more a result of inertia, low barrier to entry and ecosystem lock-in.
I'd love to replace Python with something simple, expressive, and strongly typed that compiles to native code. I have a habit of building little CLI tools as conveniences for working with internal APIs, and you wouldn't think you could tell a performance difference between Go and Python for something like that, but you can. After a year or so of writing these tools in Go, I went back to Python because the LOC difference is stark, but every time I run one of them I wish it was written in Go.
(OCaml is probably what I'm looking for, but I'm having a hard time getting motivated to tackle it, because I dread dealing with the tooling and dependency management of a 20th century language from academia.)
Have you tried Nim? Strong and static typed, versatile, compiles down to native code vía C, interops with C trivially, has macros and stuff to twist your brain if you're into that, and is trivially easy to get into.
That looks very interesting. The code samples look like very simple OO/imperative style code like Python. At first glance it's weird to me how much common functionality relies on macros, but it seems like that's an intentional part of the language design that users don't mind? I might give it a try.
You can replace Python with Nim. It checks literally all your marks (expressive, fast, compiled, strong-typing). It's as concise as Python, and IMO, Nim syntax is even more flexible.
Yes, Go can hardly be called statically typed when they use the empty interface everywhere.
Yes, OCaml would be a decent language to look into. Or perhaps even OxCaml. The folks over at Jane Street have put a lot of effort into tooling recently.
I bounced off OCaml a few years ago because of the state of the tooling, despite it being almost exactly the language I was looking for.
I'm really happy with Gleam now, and recommended it over OCaml for most use cases.
I always assumed a runtime specialized for highly concurrent, fault-tolerant, long-running processes would have a noticeable startup penalty, which is one of the things that bothers me about Python. Is that something you notice with Gleam?
I suppose you could try typescript which can compile to a single binary using node or bun. Both bun and node do type stripping of ts types, and can compile a cli to a single file executable. This is what anthropic does for Claude code.
I swear the only people who care about Python types are in Hacker News comments. I've never actually worked with or met someone who cared so much about it, and the ones who care at all seem just fine with type hints.
The people we happen to work with is an incredibly biased sample set of all software engineers.
As an example, almost everyone I’ve worked with in my career likes using macOS and Linux. But there are entire software engineering sub communities who stick to windows. For them, macOS is a quaint toy.
If you’ve never met or worked with people who care about typing, I think that says more about your workplace and coworkers than anything. I’ve worked with plenty of engineers who consider dynamic typing to be abhorrent. Especially at places like FAANG.
Long before TypeScript, before Node.js, before even “JavaScript: The Good Parts”, Google wrote their own JavaScript compiler called Closure. The compiler is written in Java. It could do many things, but as far as I can tell, its main purpose was to add types to JavaScript. Why? Because Googlers would rather write a compiler from scratch than use a dynamically typed language. I know it was used to make the early versions of Gmail. It may still be in use to this day.
How much does Python really impact ML? All of the libraries are wrappers around C code that uses GPUs anyway; it's distributed, and inference can be written in faster languages for serving anyway?
You're thinking only about the final step where we're just doing a bunch of matrix computation. The real work Python does in the ML world is automatic differentiation.
Python has multiple excellent options for this: JAX, Pytorch, Tensorflow, autograd, etc. Each of these libraries excels for different use cases.
I also believe these are cases where Python the language is part of the reason these libraries exist (whereas, to your point, for the matrix operations pretty much any language could implement these C wrappers). Python does make it easy to perform meta-programming and is very flexible when you need to manipulate the language itself.
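As a tiny illustration of the autodiff point, here it is in JAX (picked as one concrete choice; any of the libraries named above has a near-identical story):

    import jax

    def f(x):
        return x ** 2 + 3 * x

    df = jax.grad(f)   # derivative built from the Python function itself
    print(df(2.0))     # 2*x + 3 at x=2.0 -> 7.0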
Not really applicable to ML. The massive amount of compute running on the GPU is not executing in Python, and is basically the same regardless of host language.
You say he's narrow-minded, but you focus on the least relevant thing of everything he said, speed, and suggest that, somehow, something with "fast" in its name will fix it?
Speed is the least of the concerns because things like numpy are written in C, and the overhead you pay is in the glue code and FFI. The lack of a standard distribution system is a big one. Dynamic typing works well for small programs and teams but does not scale when either dimension is increased.
But pure Python is inherently slow because of language design. It also cannot be compiled efficiently unless you introduce constraints into the language, at which point you're tackling a subset thereof. No library can fix this.
Very little of what you're claiming is relevant for FastAPI specifically, which in terms of speed isn't too far from an equivalent web app written in Go. You need to research the specifics of the problem at hand instead of making broad but situationally incorrect assumptions. The subject here is web apps, and Python is very much a capable language in this niche as of the end of 2025, in terms of speed, code elegance and support for static typing (FastAPI is fully based on Pydantic) - https://www.techempower.com/benchmarks/#section=test&runid=7...
> But pure Python is inherently slow because of language design. It also cannot be compiled efficiently unless you introduce constraints into the language, at which point you're tackling a subset thereof. No library can fix this.
A similar point was raised in the other Python thread on CPython the other day, and I’m not sure I agree. For sure, it is far from trivial. However, GraalVM has shown us how it can be done for Java with generics. At a high level: take the app, compile it, and run it. The compilation takes care of any literal use of generics; running the app takes care of initialising classes and memory; instrumentation during runtime can be added to catch runtime invocations of generics otherwise missed. Obviously, this takes getting a lot of details right for it to work. But it can be done.
Implying that the existence of your tool of preference in another programming language makes other equally impressive tools something akin to a "[colossal] mistake that we'll pay for for years", "simply motivated by inertia", is way below the level of discussion I would expect from Hacker News.
I would have given the OOP the effort and due respect in formulating my response if it had been phrased in the way you're describing. It's only fair that comments that strongly violate the norms of substantive discourse don't get a well-crafted response back.
I don’t really understand the initial impetus. I like scripting in Python. That’s one of the things it’s good at. You can extremely quickly write up a simple script to perform some task, not worrying about types, memory, yada yada yada. I don’t like using Python as the main language for a large application.
This phrasing sounds contradictory to me. The whole idea of scripts is that there's nothing to install (besides one standard interpreter). You just run them.
You don't understand the concept of people running software written by other people?
One of my biggest problems with Python happens to be caused by the fact that a lot of FreeCAD is written in Python, and python3 writes __pycache__ directories everywhere a script executes (which means everywhere, including all over the inside of all my git repos, so I have to add __pycache__ to all the .gitignore files), and the env variable that is supposed to disable that STUPID behavior (PYTHONDONTWRITEBYTECODE) has no effect because FreeCAD is an AppImage and my env variable is not propagating to the environment it sets up for itself.
That is me "trying to install other people's scripts" the other people's script is just a little old thing called FreeCAD, no big.
> That is me "trying to install other people's scripts" the other people's script is just a little old thing called FreeCAD, no big.
What I don't understand is why you call it a "script".
> and python3 writes __pycache__ directories everywhere a script executes (which means everywhere, including all over the inside of all my git repos, so I have to add __pycache__ to all the .gitignore )
You're expected to do that anyway; it's part of the standard "Python project" .gitignore files offered by many sources (including GitHub).
But you mean that the repo contains plugins that FreeCAD will import? Because otherwise I can't fathom why it's executing .py files that are within your repo.
Anyway, this seems like a very tangential rant. And this is essentially the same thing as Java producing .class files; I can't say I run into a lot of people who are this bothered by it.
Inline script metadata itself is not tied to uv because it's a Python standard. I think the association between the two comes from people discovering ISM through uv and from their simultaneous rise.
pipx can run Python scripts with inline script metadata. pipx is implemented in Python and packaged by Linux distributions, Free/Net/OpenBSD, Homebrew, MacPorts, and Scoop (Windows): https://repology.org/project/pipx/versions.
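e.g., with a recent pipx and a script carrying a PEP 723 block like the ones upthread (myscript.py is a stand-in name):

    pipx run ./myscript.py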
Perhaps a case for standardizing on an executable name like `python-script-runner` that will invoke uv, pipx, etc. as available and preferred by the user. Scripts with inline metadata can put it in the shebang line.
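That is, something like this, where `python-script-runner` is purely the hypothetical name from the proposal above:

    #!/usr/bin/env python-script-runner
    # /// script
    # dependencies = ["requests"]
    # ///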
I get the impression that others didn't really understand your / the OP's idea there. You mean that the user should locally configure the machine to ensure that the standardized name points at something that can solve the problem, and then accepts the quirks of that choice, yes?
A lot of people seem to describe a PEP 723 use case where the recipient maybe doesn't even know what Python is (or how to check for a compatible version), but could be instructed to install uv and then copy and run the script. This idea would definitely add friction to that use case. But I think in those cases you really want to package a standalone (using PyInstaller, pex, Briefcase or any of countless other options) anyway.
It seems to be Linux-specific (does it even work on other Unix-like OSes?), and Linux usually has a system Python which is reasonably stable for the things you need scripting for, whereas this requires Go to be installed.
You could also use shell scripting or Python or another scripting language. While Python is not great at backward compatibility, most scripts will have very few issues. Shell scripts are backward compatible, as are many other scripting languages (e.g. Tcl), and they are more likely to be preinstalled. If you are installing Go, you could just install uv and use Python.
The article does say "I started this post out mostly trolling" which is part of it, but mostly the motivation would be that you have a strong preference for Go.
This is more than just trivially true for Python in a scripting context, too, because it doesn’t do things like type coercion that some other scripting languages do. If you want to concat an int with a string you’ll need to cast the int first, for example. It also has a bunch of list-ish and dict-ish built in types that aren’t interchangeable. You have to “worry about types” more in Python than in some of its competitors in the scripting-language space.
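A quick REPL illustration of that:

    >>> "total: " + 3
    TypeError: can only concatenate str (not "int") to str
    >>> "total: " + str(3)
    'total: 3'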
While you are at it, might as well do this for C++ or assembly. You hate scripting so much that you would rather go to great lengths to use a compiled language and throw away all the benefits of a scripting language and scripting itself, just because you don't like the language, not because of technical merit. Congratulations, you just wasted many hours of your precious time.
> The price of convenience is difficulties to scale
Of course, they never scale. The moment you start thinking about scaling, you should stop writing code as throwaway scripts but build them properly. That's not an argument to completely get rid of Python or bash. The cost of converting Python code to Go is near zero these days if there is a need to do so. Enough has been said about premature optimization.
> Anyone who's ever tried to get python working on different systems knows what a steep annoying curve it is.
If you need 10 libraries of certain versions to run a few lines of Python code, nobody calls that a script anymore. It becomes a proper project that requires proper package management, just like Go.
There is a much larger gap in language ergonomics between python and C++ than between python and golang. Compile time and package management being some of the major downsides to C++.
"You'd rather drive a compact car than an SUV? Might as well drive a motorcycle then!"
The main problem with Python for system scripts is that even in that domain it's not a very good choice.
Perl is right there, requires no installation, and is on practically every unix-like under the sun. Sure it's not a good language, or performant, or easy to extend, but neither is python, so who cares. And, if anything, it's a bit more expressive and compact than python, maybe to a fault.
> The one big problem: gopls. We need the first line of the script to be without spaces...
Specifically the problem here is automated reformatting. Gopls typically does this on save as you are editing, but it is good practice for your CI system to enforce the invariant that all merged *.go files are canonically formatted. This ensures that the user who makes a change formats it (and is blamed for that line), instead of the hapless next person to touch some other spot in that file. It also reduces merge conflicts.
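For instance, a common CI guard for that invariant (one of several equivalent idioms):

    test -z "$(gofmt -l .)"   # non-empty output means unformatted files; fail the build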
But there's a second big (bigger) problem with this approach: you can't use a go.mod file in a one-off script, and that means you can't specify versions of your dependencies, which undermines the appeal to compatibility that motivated your post:
> The primary benefit of go-scripting is [...] and compatibility guarantees. While most languages aims to be backwards compatible, go has this a core feature. The "go-scripts" you write will not stop working as long as you use go version 1.*, which is perfect for a corporate environment.
> In addition to this, the compatibility guarantees makes it much easier to share "scripts". As long as the receiving end has the latest version of go, the script will run on any OS for tens of years in the future.
Expected a rant, got a life-pro-tip. Enough for a good happy new year.
That said, we can abuse the same trick for any language that treats `//` as a comment.
List of some practical(?) languages: C/C++, Java, JavaScript, Rust, Swift, Kotlin, ObjC, D, F#, GLSL/HLSL, Groovy
Personally, among those languages, GLSL sounds most interesting. A single-GLSL graphics demo is always inspiring. (Something like https://www.shadertoy.com/ )
Also, let’s not forget that we can do something similar using a block comment (`/* … */`). An example in C:
For Swift there’s even a project[1] that allows running scripts that have external dependencies (posting the fork because the upstream is mostly dead).
I think it’s uv’s equivalent, but for Swift.
(Also Swift specifically supports an actual shebang for Swift scripts.)
I love it. I'm using Go to handle building full stack javascript apps, which actually works great since esbuild can be used directly inside a Go program. The issue is that it's a dependency, so I settled for having a go mod file and running it directly with Go. If somehow these dependencies could be resolved without an explicit module configured (say, it was inline in the go file itself) it would be perfect. Alas, it will probably never happen.
That being said...use Go for scripting. It's fantastic. If you don't need any third party libraries this approach seems really clean.
I make computers do things, but I never act like my stuff is the only stuff that makes things happen. There is a huge software stack of which my work is just the final pieces.
The problem with calling it “full stack” (even if it has a widely understood meaning) is that it implicitly puts the people doing the actual lower-level work on a pedestal. It creates the impression that if this is already “full stack,” then things like device drivers, operating systems, or foundational libraries must be some kind of arcane magic reserved only for experts, which they aren’t.
The term “full stack” works fine within its usual context, but when viewed more broadly, it becomes misleading and, in my opinion, problematic.
Or, alternatively, it ignores and devalues the existence of these parts. In both cases, it's a weird "othering" of software below a certain line in the, ahem, full stack.
And it's okay. It doesn't mean it should be this way for everyone else.
It is pretty common (and has been for at least two decades) for web devs to differentiate like so: backend, frontend or both. This "both" part is almost always replaced by "full stack".
When people say this they just mean they do both parts of a web app and have no ill will or neglect towards systems programmers or engineers working on a power plant.
I made one of these too! I decided not to use // because I use gofmt auto formatting in my editor and it puts a space between the // and the usr. This one isn't changed by gofmt:
That's explicit support rather than using the same // hack. The language is specifically ignoring a shebang even though it doesn't match the usual comment syntax.
> Sidetrack: I actually looked up what the point of arg0 even is since I failed to find any usecases some months back and found this answer.
I think arg0 was always useful, especially when developing multifunctional apps like busybox that change their behavior depending on the name they were executed as.
In a similar way I changed all of my build and deployment scripts to Go not long ago. The actual benefit was that utility functions used by the service could be shared in deployment. So I could easily share code to determine if services/DBs were online, or to access cloud secrets in a uniform way. It also improved all the error checks to be much clearer (did the curl fail because it's offline or malformed?).
Additionally, it is even more powerful when used with go modules. Make every script call a single function in the shared “scripts” module and they will all be callable from anywhere symmetrically. This will ensure all scripts build even if they aren’t run all the time. It also means any script can call scripts.DeployService(…) and they don’t care what dir they are in, or who calls it. The arguments make it clear what paths/configuration is needed for each script.
Cute trick! I pointlessly wondered if I could make it work with Ruby and you kinda can, if you can tolerate a single error message before the script runs (sadly # comments don't work as shells consider them comments too):
=begin
ruby $0; exit
=end
puts "Hello from Ruby"
Not immediately useful, but no doubt this trick will pop up at some random moment in the future and actually be useful. Very basic C99 too, though I'm not sure I'd want to script with it(!):
You don't even need to end the file in `.go` or the like when using shebangs, and any self-respecting editor will be good at parsing out shebangs to identify file types (... well, Emacs seems to do it well enough for me)
no need to name your program foo.go when you could just name it foo
tcc even supports that with `#!/usr/local/bin/tcc -run`, although I don't understand people who use C or Go for "scripting" when Python, Ruby, Tcl or Perl have much superior ergonomics.
This was a relatively old project that used a C program as build system / meta generator. All you needed was a working C compiler (and your shell to execute the first line). From there, it built and ran a program that generated various tables and some source code, followed by compiling the actual program. The final program used a runtime reflection system, which was set up by the generated tables and code from the first stage.
The main reason was to do all this without any dependencies beyond a C compiler and some POSIX standard library.
I've tried Go scripting but would still prefer Python (uv is a game changer, tbh). My go-to for automation will always be PowerShell (on Linux), though. It's too bad PowerShell has the MSFT ick keeping people away from adopting it for automation. I can convince you to give it a try if you let me.
> don't want to have virtual environments and learn what the difference between pip, poetry and uv is
Oh come on, it's easy:
Does the project have a setup.py? if so, first run several other commands before you can run it. python -m venv .venv && source .venv/bin/activate && pip install -e .
else does it have a requirements.txt? if so python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
else does it have a pyproject.toml? if so poetry install and then prefix all commands with poetry run ...
else does it have a pipfile? pipenv install and then prefix all commands with pipenv run ...
else does it have an environment.yml? if so conda env create -f environment.yml and then look inside the file and conda activate <environment_name>
else does it have a uv.lock? then uv sync (or uv pip install -e .) and then prefix commands with uv run.
If you've checked out a repo or unpacked a tarball without documentation, sure.
If you got it from PyPI or the documentation indicates you can do so, then you just use your tooling of choice.
Also, the pip+venv approach works fine with pyproject.toml, which was designed for interoperability. Poetry is oriented towards your own development, not working with someone else's project.
Speaking of which, a project that has a pipfile, environment.yml, uv.lock etc. and doesn't have pyproject.toml is not being seriously distributed. If this is something internal to your team, then you should already know what to do anyway.
It is not "no true scotsman" to point out that tons of projects are put on GitHub etc. without caring about whether others will actually be able to download and "install" and use the code locally, and that it's unreasonable to expect ecosystems to handle those cases by magic. To the extent that a Python ecosystem exists and people understand development within that ecosystem, the expectations for packaging are clear and documented and standard.
Acting as if these projects using whatever custom tool (and its associated config, by which the tool can be inferred), where that tool often isn't even advertised as an end-user package installer, are legitimate distributions is dishonest; and acting as if it reflects poorly on Python that this is possible, far more so. Nothing prevents anyone from creating a competitor to npm or Cargo etc.
Related tangent: I recently learned about Mise^1 -- a tool for managing multiple language runtime versions. It might ease some of the python environment setup/mgmt pains everyone complains about. It apparently integrates with uv, and can do automatic virtualenv activation....
One thing I hate about Python executables, at least the ones I've seen installed in Debian/Ubuntu is that the ones in /usr/bin are wrappers to execute somewhere in your site-packages.
I just want to see the full script where I execute it.
> Did you (rightfully) want to tear your eyes out when some LLM suggested that you script with .mjs?
I respectfully disagree with this sentiment. JS is a fantastic Python replacement for scripts. Node.js has added all kinds of utility functions that help you write scripts without needing external dependencies. Bun, Deno, and Node.js can execute TS files (if you want to bring types into the mix). All 3 runtimes are sufficiently performant. If you do end up needing external dependencies, they're only a package.json away. I write all my scripts in JS files these days.
For another excellent scripting solution, one that has:
- fast startup (no compilation)
- uses a real language
- easy to upgrade beyond a script
- has tons of excellent dependencies baked-in
Look no further than babashka! It's a Clojure interpreter that has first-class support for scripting. Great built-in libs for shelling out to other programs, file management, anything HTTP-related (client and server), parsing, HTML building, etc.
Babashka is my go-to tool for starting all new projects now. It has mostly everything you need. And if it’s missing anything, it has some of the most interesting and flexible dependency management of any runtime I’ve ever seen. Via the “pod protocol” any process (written in go/rust/java whatever) can be exposed as a babashka dependency and bundled straight in. And no separate “install dependencies” command is required, it installs and caches things as needed.
And of course you keep all of the magic of REPL based development. It’s got built in nrepl support, so just by adding '--nrepl-server 7888' to your command, you can connect to it from your editor and edit the process live. I’m building my personal site this way and it’s just SO nice.
Sorry for the rant but when superior scripting solutions come up, I have to spread the love for bb. It’s too good to not talk about!!
The blog says that in regard to finding bash with env. My reading is that it does not make the same claim regarding finding go with env. bash is commonly found at /bin/bash (or a symlink there exists) as it is widely used in scripts and being available at that path is a well-known requirement for compatibility. Go does not so much have a canonical path and I have personally installed it at a variety of paths over the years (with the majority working with env). While I agree with the author of the blog that using env to find bash may or may not improve compatibility, I also agree with the parent comment that using env to find go probably does improve compatibility.
You can use https://github.com/erning/gorun as a Go script runner. It lets you embed `go.mod` and `go.sum` and have dependencies in Go scripts. This is more verbose than Python's inline script metadata and requires manual management of checksums. gorun caches built binaries, so scripts start quickly after the first time.
So the entire reason why this is not a "real" shebang and instead takes the roundtrip through the shell is because the Go runtime would trip over the # character?
I think this points to some shortcomings of the shebang mechanism itself: it expects the shebang line to be present and adhering to a specific structure - but then passes the entire file, shebang line included, to the interpreter, which has to process (and hopefully ignore) that line again.
I know that situations where one piece of text is parsed by multiple different systems are intellectually interesting and give lots of opportunities for cleverness - but I think the straightforward solution would be to avoid such situations.
So maybe the linux devs should consider adding a new form for the shebang where the first line is just stripped before passing the file contents to the interpreter.
Yep, this is a common misunderstanding, and the blog post itself repeats it.
The only way to "pass the file contents" would be through the standard input stream, but the script might want to use stdin like normal, so this isn't an option.
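A toy sketch can make the mechanics concrete (file names here are hypothetical). The kernel hands the interpreter the script's path as an extra argument; the interpreter then re-opens the file itself, shebang line and all:

    # toy_interp.py -- a pretend "interpreter", just to show what a shebang
    # exec actually delivers: the script's *path*, never its contents.
    import sys

    print("argv:", sys.argv)  # argv[1] is the path of the script being run

    with open(sys.argv[1]) as f:
        shebang = f.readline().rstrip("\n")

    # The interpreter re-reads the whole file, first line included, and has
    # to tolerate it. Python does ('#' starts a comment); Go does not.
    print("line I must skip myself:", shebang)

Make any executable file whose first line is #!/usr/bin/python3 /tmp/toy_interp.py and run it: you'll see the script's path in argv, not its text.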
> Sidetrack: I actually looked up what the point of arg0 even is since I failed to find any usecases some months back and found this answer[0]. Confused, and unsatisfied by the replies, I gave up trying to understand "why arg0?" as some sort of legacy functionality.
I struggle to think of how the answers provided here could be clearer or more satisfactory. Why write an article if you're going to half-ass your research? Why even mention this nothingburger sidetrack at all...? (Bait?)
[0] https://stackoverflow.com/questions/24678056/linux-exec-func...
> go run does not properly return the script error code back to the operating system and this is important for scripts, because error codes are one of the most common ways multiple scripts interact with each other and the operating system environment.
Tangent but... I kinda like the Python language. What I don't like about Python is the way environments are managed.
This is something I generally believe, but I think it's particularly important for things like languages and runtimes: the idea of installing things "on" the OS or the system needs to die.
Per-workspace or per-package environment the way Go, Rust, etc. does it is correct. Installing packages globally is wrong.
There should not be such a thing as "globally." Ideally the global OS should be immutable or nearly so, with the only exception being maybe hardware driver stuff.
(Yes I know there's stuff like conda, but that's yet another thing to fix a fundamentally broken paradigm.)
> This is something I generally believe, but I think it's particularly important for things like languages and runtimes: the idea of installing things "on" the OS or the system needs to die.
Python has been trying to kill it for years; or at least, the Linux distros have been seeking Python's help in killing it on Linux for years. https://peps.python.org/pep-0668/ is the latest piece of this.
I feel like this principle could be codified as "the system is not a workspace."
The use of the system as a workspace goes back to when computers were either very small and always personal only to one user, or when they were very big and administrated by dedicated system administrators who were the only ones with permission to install things. Both these conditions are obsolete.
But "the system is not a workspace" acts like resources are free. Everything that’s wrong with a modern computer being slower than one from 30 years ago at running user applications has its roots in this kind of thing. It’s more obvious on mobile devices but desktops still suffer. Android needed more RAM and had worse power utilization until a lot was done to move toward native compiled code and background process control. Meanwhile Electron apps think it’s okay to run multiple copies of JavaScript environments as if working RAM were free and performance weren’t hurt.
I remember when I first experienced golang, I tried compiling it.
The compilation command returned immediately, and I thought it had failed. So I tried again and same result. WTF? I thought to myself. Till I did an `ls` and saw an `a.out` sitting in the directory. I was blown away by how fast the golang compiler was.
>This second method is, by the way, argued to increase compatibility as we utilize env to locate bash, which may not be located at /bin/bash. How true this is, is a topic I dare not enter.
At least it seems important on NixOS, I had to rewrite a few shebangs on some scripts that used /bin/bash and didn't work on NixOS.
Using `uv` with python is significantly safer and better. At least you get null safety. Sure, you can't run at the speed of light, but at least you can have some decent non-halfarsed-retrofitted type checking in your script.
In what way does Python have more null safety than Go? Using None will cause exceptions in basically all the same places using nil will cause panics in Go, and Python similarly lacks the usual null-safe operators like traversal (?.), coalescing (??), etc.
You can abuse the falsity of None to do things like `var or ""`, but this ground gets quite shaky when real bools get involved.
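For anyone who hasn't been bitten by it, a minimal sketch of that shaky ground (names are made up for illustration):

    def timeout_enabled(value=None):
        return value or True        # intended: default to True when unset

    print(timeout_enabled())        # True, as intended
    print(timeout_enabled(False))   # True -- the caller's explicit False is eaten

    # The usual fix: test identity against None instead of truthiness.
    def timeout_enabled_fixed(value=None):
        return True if value is None else value

    print(timeout_enabled_fixed(False))  # False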
Try the following in sh:
////////usr/local/go/bin/go
Well, how about this: I use ruby or python. And not shell.
Somehow I have been doing so for 25+ years. Never regretted it. Never really needed shell either. (Ok, that's not entirely true; I refer to shell scripts. I do use bash as my primary shell, largely due to simplicity; I don't use shell scripts though, save for keeping a few legacy ones should I be at a computer that has no support for ruby, python or perl. But this is super-rare nowadays.)
That, in retrospect, was what made rye temporarily attractive and popular.
(this is a joke btw)
What you meant was, "you don't need python pre-installed". This does not solve the problem of not wanting to have (or being limited from having) python installed.
If you've never used Clojure and start a Clojure project, you will almost definitely find advice telling you to use Leiningen.
For Python, if you search online you might find someone saying to use uv, but also potentially venv, poetry or hatch. I definitely think uv is taking over, but it's not yet ubiquitous.
Ironically, I actually had a similar thing installing Go the other day. I'd never used Go before, and installed it using apt only to find that version was too old and I'd done it wrong.
Although in that case, it was a much quicker resolution than I think anyone fighting with virtual environments would have.
Over the years, I've used setup.py, pip, pipenv (which kept crashing though it was an official recommendation), manual venv+pip (or virtualenv? I vaguely remember there were 2 similar tools and none was part of a minimal Python install). Does uv work in all of these cases? The uv doc pointed out by the GP is vague about legacy projects, though I've just skimmed through the long page.
IIRC, Python tools didn't share their data across projects, so they could build the same heavy dependencies multiple times. I've also seen projects with incomplete dependencies (installed through Conda, IIRC) which were a major pain to get working. For many years, the only simple and sane way to run some Python code was in a Docker image, which has its own drawbacks.
Yes. The goal of uv is to defuck the python ecosystem and they're doing a very good job at it so far.
I only work a little bit with python.
I get that installing to the site-packages is a security vulnerability. Installing to my home directory is not, so why can't that be the happy path by default? Debian used to make this easy with the dist-packages split leaving site-packages as a safe sandbox but they caved.
The brilliant part about venvs is that A and B can have their completely separate mutually incompatible environments.
uv has replaced that for me, and has replaced most other tools that I used with the (tiny amount of) Python that I write for production.
One of the neatest features of uv is that it uses clever symlinking tricks so if you have a dozen different Python environments all with the same dependency there's only one copy of that dependency on disk.
For pip to do this, first it would have to organize its cache in a sensible manner, such that it could work as an actual download cache. Currently it is an HTTP cache (except for locally-built wheels), where it uses a vendored third-party library to simulate the connection to files.pythonhosted.org (in the common PyPI case). But it still needs to connect to pypi.org to figure out the URI that the third-party library will simulate accessing.
Before uv came along I was starting to write stuff in Go that I’d normally write in Python.
Python's always been a pretty nice language to work in, and uv makes it one of the most pleasant to deal with.
I think this properly kicked off with RVM, which needed to come into existence because you had this situation where the Ruby interpreter was going through incompatible changes, the versions on popular distributions were lagging, and Rails, the main reason people were turning to Ruby, was relatively militant about which interpreter versions it would support. Also, building the interpreter such that it would successfully run Rails wasn't trivial. Not that hard, but enough that a convenience wrapper mattered. So you had a whole generation of web devs coming up in an environment where the core language wasn't the first touchpoint, and there wasn't an assumption that you could (or should) rely on what you could apt-get install on the base OS.
This is broadly an extremely good thing.
But the critical thing that RVM did was that it broke the circular dependency at the core of the problem: it didn't itself depend on having a working ruby interpreter. Prior to that you could observe a sort of sniffiness about tools for a language which weren't implemented in that language, but RVM solved enough of the pain that it barged straight past that.
Then you had similar tools popping up in other languages - nvm and leiningen are the first that spring to mind, but I'd also throw (for instance) asdf into the mix here - where the executable that you call to set up your environment has a '#!/bin/bash' shebang line.
Go has sidestepped most of this because of three things: 1) rigorous backwards compatibility; 2) the simplest possible installation onramp; 3) being timed with the above timeline so that having a pre-existing `go` binary provided by your OS is unlikely unless you install it yourself. And none of those are true of Python. The backwards compatibility breaks in this period are legendary, you almost always do have a pre-existing Python to confuse things, and installing a new python without breaking that pre-existing Python, which your OS itself depends on, is a risk. Add to that the sniffiness I mentioned (which you can still see today on `uv` threads) and you've got a situation where Python is catching up to what other languages managed a decade ago.
Again.
I thought the current best practice for Clojure was to use the shiny new built-in tooling? deps.edn or something like that?
This is sort of like saying "You might find someone saying to drive a Ford, but also potentially internal combustion engine, Nissan or Hyundai".
But with much more detail: it seems complicated because
* People refuse to learn basic concepts that are readily explained by many sources; e.g. https://chriswarrick.com/blog/2018/09/04/python-virtual-envi... [0].
* People cling to memories of long-obsolete issues. When people point to XKCD 1987 they overlook that Python 2.x has been EOL for almost six years (and 3.6 for over four, but whatever)[1]; only Mac users have to worry about "homebrew" (which I understand was directly interfering with stuff back in the day) or "framework builds" of Python; easy_install is similarly a long-deprecated dinosaur that you also would never need once you have pip set up; and fewer and fewer people actually need Anaconda for anything[2][3].
* There is never just one way to do it, depending on your understanding of "do". Everyone will always imagine that the underlying functionality can be wrapped in a more user-friendly way, and they will have multiple incompatible ideas about what is the most user-friendly.
But there is one obvious "way to do it", which is to set up the virtual environment and then launch the virtual environment's Python executable. Literally everything else is window dressing on top of that. The only thing that "activating" the environment does is configure environment variables so that `python` means the virtual environment's Python executable. All your various alternative tools are just presenting different ways to ensure that you run the correct Python (under the assumption that you don't want to remember a path to it, I guess) and to bundle up the virtual environment creation with some other development task.
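A minimal sketch of that one obvious way, using only the standard library (paths assume the POSIX layout; on Windows the interpreter lives at .venv\Scripts\python.exe):

    import subprocess
    import venv

    # This is essentially all that "python -m venv .venv" does.
    venv.create(".venv", with_pip=True)

    # No activation step: just invoke the environment's own interpreter.
    subprocess.run(
        [".venv/bin/python", "-c", "import sys; print(sys.prefix)"],
        check=True,
    )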
The Python community did explicitly provide for multiple people to provide such wrappers. This was not by providing the "15th competing standard". It was by providing the standard (really a set of standards designed to work together: the virtual environment support in the standard library, the PEPs describing `pyproject.toml`, and so on), which replaced a Wild West (where Setuptools was the sheriff and pip its deputy).
[0]: By the way, this is by someone who doesn't like virtual environments and was one of the biggest backers of PEP 582.
[1]: Of course, this is not Randall Munroe's fault. The comic dates to 2018, right in the middle of the period where the community was trying to sort things out and figure out how to not require the often problematic `setup.py` configuration for every project including pure-Python ones.
[2]: The SciPy stack has been installable from wheels for almost everyone for quite some time and they were even able to get 3.12 wheels out promptly despite being hamstrung by the standard library `distutils` removal.
[3]: Those who do need it, meanwhile, can generally live within that environment entirely.
The way I teach, I would start there; then you always have it as a fallback, and understand the system better.
I generally sort users into aspirants who really should learn those things (and will benefit from it), vs. complete end users who just want the code to run (for whom the developer should be expected to provide, if they expect to gain such a following).
This is more of a pip issue than uv though, and `uv pip` is still preferable in my mind, but it seems Python package management will forever be a mess; not even the bandaid uv can fix things like these.
And regardless of whether you use only uv, or pip-via-uv, or straight-up pip, dependencies you install later step on dependencies you installed earlier, and no tool so far seems to try to solve this, which leads me to conclude it's a Python problem, not a package manager problem.
First off, in my mind the kinds of things that are "scripts" don't have dependencies outside the standard library, or if they do are highly specific to my own needs on my own system. (It's also notable that one of the advantages the author cites for Go in this niche is a standard library that avoids the need for dependencies in quick scripts! Is this not one of Python's major selling points since day 1?)
Second, even if you have dependencies you don't have to learn differences between these tools. You can pick one and use it.
Third, virtual environments are literally just a place on disk for those dependencies to be installed, that contains a config file and some stubs that are automatically set up by a one-liner provided by the standard library. You don't need to go into them and inspect anything if you don't want to. You don't need to use the activation script; you can just specify the venv's executable instead if you prefer. None of it is conceptually difficult.
Fourth, sharing an environment for these quick scripts actually just works fine an awful lot of the time. I got away with it for years before proper organization became second nature, and I would usually still be fine with it (except that having an isolated environment for the current project is the easiest way to be sure that I've correctly listed its dependencies). In my experience it's just not a thing for your quick throwaway scripts to be dependent on incompatible Numpy versions or whatever.
... And really, to avoid ever having to think about the dependencies you provide dynamically, you're going to switch to a compiled language? If it were such a good idea, nobody would have thought of making languages like Python in the first place.
And uh...
> As long as the receiving end has the latest version of go, the script will run on any OS for tens of years in the future. Anyone who's ever tried to get python working on different systems knows what a steep annoying curve it is.
The pseudo-shebang trick here isn't going to work on Windows any more than a conventional one is. And no, when I switched from Windows to Linux, getting my Python stuff to work was not a "steep annoying curve" at all. It came more or less automatically with acclimating to Linux in general.
(I guess referring to ".pyproject" instead of the actually-meaningful `pyproject.toml` is just part of the trolling.)
I had a recent conversation with a colleague. I said how nice it is using uv now. They said they were glad because they hated messing with virtualenvs so much that preferred TypeScript now. I asked them what node_modules is, they paused for a moment, and replied “point taken”.
uv still uses venvs because it’s the official way Python stores all the project packages in one place. Node/npm, Go/go, and Rust/cargo all do similar things, but I only really hear people grousing about Python’s version, which as you say, you can totally ignore and never ever look at.
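If you do want to look at it once, the whole "mystery" is a prefix switch; running this sketch inside and outside a venv shows the only thing that really changes:

    import sys
    import sysconfig

    print("prefix:       ", sys.prefix)                        # the active (v)env root
    print("site-packages:", sysconfig.get_paths()["purelib"])  # Python's node_modules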
The very long discussion (https://discuss.python.org/t/pep-582-python-local-packages-d...) of PEP 582 (https://peps.python.org/pep-0582/ ; the "__pypackages__" folder proposal) seems relevant here.
It'll be interesting to see how this all plays out with __pypackages__ and friends.
Yep. And so does the pyenv approach (which I understand involves permanently adding a relative path to $PATH, wherein the system might place a stub executable that invokes the venv associated with the current working directory).
And so do hand-made subshell-based approaches, etc. etc.
In "development mode" I use my activation-script-based wrappers. When just hacking around I generally just give the path to the venv's python explicitly.
The standard recommendation for this is `tomli`, which became the basis of the standard library `tomllib` in 3.11.
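A sketch of the usual compatibility shim, in case it's useful (assumes the tomli backport is installed on pre-3.11 interpreters):

    import sys

    if sys.version_info >= (3, 11):
        import tomllib               # stdlib since 3.11
    else:
        import tomli as tomllib      # the backport tomllib grew from

    with open("pyproject.toml", "rb") as f:  # binary mode is required
        data = tomllib.load(f)

    print(data.get("project", {}).get("name"))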
So this is a skill issue, the blog post. `uv run` and PEP 723 solved every single issue the author is describing.
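For readers who haven't seen one, roughly what such a script looks like (the dependency and version pin are arbitrary examples; the shebang needs an env with -S support, i.e. GNU coreutils or a modern BSD/macOS env):

    #!/usr/bin/env -S uv run --script
    # /// script
    # requires-python = ">=3.12"
    # dependencies = ["requests"]
    # ///
    import requests

    print(requests.get("https://example.org").status_code)

chmod +x and run it; uv resolves an interpreter and the dependencies on first run.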
I have worked with Python on and off for 20+ years and I _always_ dreaded working with any code base that had external packages or a virtual environment.
`uv run` changed that and I migrated every code base at my last job to it. But it was too late for my personal stuff - I already converted or wrote net new code in Go.
I am on the fence about Python long term. I’ve always preferred typed languages and with the advent of LLM-assisted coding, that’s even more important for consistency.
The user
just
wants
to run
the damn program.
> `uv run` and PEP 723 solved every single issue the author is describing.
PEP 723 eh? "Resolution: 08-Jan-2024"
Sure, so long as you somehow magically gain the knowledge to use uv, then you will have been able to have a normal, table-stakes experience for a whole 2 years now. Yay, go Python ecosystem!
Is uv the default, officially recommended way to run Python? No? Remember to wave goodbye to all the users passing the language by.
Well, ChatGPT gives the same explanation as the article, which is unsurprising considering this mechanic has been repeated many times.
>none other fits as well as Go
Nim, Zig, and D all have a `-run` argument and can be used in a similar way. Swift, OCaml, and Haskell can directly execute a file, no need to provide an argument.
[0]: https://groups.google.com/d/msg/golang-nuts/iGHWoUQFHjg/_dbL...
https://github.com/traefik/yaegi
However... scripting requires (in my experience), a different ergonomic to shippable software. I can't quite put my finger on it, but bash feels very scriptable, go feels very shippable, python is somewhere in the middle, ruby is closer to bash, rust is up near go on the shippable end.
Good scripting is a mixture of OS-level constructs available to me in the syntax I'm in (bash obviously is just using OS commands with syntactic sugar to create conditional, loops and variables), and the kinds of problems where I don't feel I need a whole lot of tooling: LSPs, test coverage, whatever. It's languages that encourage quick, dirty, throwaway code that allows me to get that one-off job done the guy in sales needs on a Thursday so we can close the month out.
Go doesn't feel like that. If I'm building something in Go I want to bring tests along for the ride, I want to build a proper build pipeline somewhere, I want a release process.
I don't think I've thought about language ergonomics in this sense quite like this before, I'm curious what others think.
It's really a huge pain point in python. Pure python dependencies are amazingly easy to use, but there's a lot of packages that depend on either c extensions that need to be built or have OS dependencies. It's gotten better with wheels and manylinux builds, but you can still shoot your foot off pretty easily.
No, bash is technically not "more" OS than e.g. Python. It just happens that bash is (often) the default shell in the terminal emulator.
In python, doing math or complex string or collection operations is usually a simple oneliner, but calling shell commands or other OS processes requires fiddling with the subprocess module, writing ad-hoc streaming loops, etc - don't even start with piping several commands together.
Bash is the opposite: As long as your task can be structured as a series of shell commands, it absolutely shines - but as soon as you require custom data manipulation in any form, you'll run into awkward edge cases and arbitrary restrictions - even for things that are absolutely basic in other languages.
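Concretely, the bash one-liner `ps aux | grep python | wc -l` becomes something like this on the Python side (a sketch following the subprocess docs' pipeline recipe):

    import subprocess

    ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep = subprocess.Popen(["grep", "python"],
                            stdin=ps.stdout, stdout=subprocess.PIPE)
    ps.stdout.close()               # let ps receive SIGPIPE if grep exits first
    out, _ = grep.communicate()
    ps.wait()
    print(len(out.splitlines()))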
More specifically, for the readability of code written by an LLM.
The main reasons being it is slow, its type system is significantly harder to use than other languages, and it's hard to distribute. The only reason to use it is inertia. Obviously inertia can be sufficient for many reasons, but I would like to see the industry consider python last, and instead consider typescript, go, or rust (depending on use case) as a best practice. Python would be considered deprecated and only used for existing codebases like pytorch. Why would you write a web app in Python? Types are terrible, it's slow. There are way better alternatives.
With that said... there is a reason why ML went with Python. GPU programming requires C-based libraries. NodeJS does not have a good FFI story, and neither does Rust or Go. Yes, there's support, but Python's FFI support is actually better here. Zig is too immature here.
The world deserves a Python-like language with a better type system, a better distribution system, and not nearly as much dynamism footguns / rope for people to hang themselves with.
Why replace a nice language like python with anything coming out of javascript?
If TypeScript had the awesome python stdlib and the Numpy/ML ecosystem I would use it over Python in a heartbeat.
For IO bound tasks, it also helps that JavaScript has a much simpler threading model. And it ships an event based IO system out of the box.
Has shortcomings like all languages but it brought a lot of advanced programming language concepts to the masses!
> The only reason to use it is inertia
and
> Typescript is ubiquitous in web
:-)
There are some things that aren't as good, e.g. Python's arbitrary precision integers are definitely nicer for scripting. And I'd say Python's list comprehension syntax is often quite nice even if it is weirdly irregular.
But overall Deno is a much better choice for ad-hoc scripting than Python.
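For concreteness, the two Python niceties conceded above, in miniature:

    print(2 ** 200)  # arbitrary-precision int: no overflow, no BigInt suffix
    # Comprehension syntax runs expression, then for, then if -- irregular
    # next to statement order, but compact:
    print([x * x for x in range(10) if x % 2])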
JavaScript itself supports bigint literals just fine. Just put an ‘n’ after your number literal. Eg 0xffffffffffffffn.
There’s a whole bunch of features I wish we could go in and add to json. Like comments, binary blobs, dates and integers / bigints. It would be so much nicer to work with if it had that stuff.
Market pressure. Early ML frameworks were in Lisp, then eventually Lua with Torch, but demand dictated the choice of Python because "it's simple" even if the result is cobbled together.
Lisp is arguably still the most suitable language for neural networks for a lot of reasons beyond the scope of this post, but the tooling is missing. I’m developing such a framework right now, though I have no illusions that many will adopt it. Python may not be elegant or efficient, but it's simple, and that's what people want.
(OCaml is probably what I'm looking for, but I'm having a hard time getting motivated to tackle it, because I dread dealing with the tooling and dependency management of a 20th century language from academia.)
https://nim-lang.org
Yes, OCaml would be a decent language to look into. Or perhaps even OxCaml. The folks over at Jane Street have put a lot of effort into tooling recently.
If you want something with minimal startup times then you need a language that compiles to native binaries like Zig, Rust or OCaml.
1. You can import by relative file path. (Python can't.)
2. You can specify third party dependencies in a single file script and have that work properly with IDEs.
Deno is the best option I've found that has both of those and is statically typed.
I'm hoping Rust will eventually too but it's going to be at least a year or two.
As an example, almost everyone I’ve worked with in my career likes using macOS and Linux. But there are entire software engineering sub communities who stick to windows. For them, macOS is a quaint toy.
If you’ve never met or worked with people who care about typing, I think that says more about your workplace and coworkers than anything. I’ve worked with plenty of engineers who consider dynamic typing to be abhorrent. Especially at places like FAANG.
Long before typescript, before nodejs, before even “JavaScript the good parts”, Google wrote their own JavaScript compiler called Closure. The compiler is written in Java. It could do many things - but as far as I can tell, the main purpose of the compiler was to add types to JavaScript. Why? Because googlers would rather write a compiler from scratch than use a dynamically typed language. I know it was used to make the early versions of Gmail. It may still be in use to this day.
And some of those who still complain are just momentarily stuck with it.
Just like survivorship bias. It's productive to ponder on the issues experienced by those who never returned.
Python has multiple excellent options for this: JAX, Pytorch, Tensorflow, autograd, etc. Each of these libraries excels for different use cases.
I also believe these are cases where Python the language is part of the reason these libraries exist (whereas, to your point, for the matrix operations pretty much any language could implement these C wrappers). Python does make it easy to perform meta-programming and is very flexible when you need to manipulate the language itself.
If ML fulfills its promise, it won't matter in the least what language the code is/was written in.
If it doesn't, it won't matter anyway.
> The main reasons being it is slow, <snip>, and it's hard to distribute.
Don't forget that Python consumes approximately 70x more power when compared to C.
Speed is the least concern because things like numpy are written in C and the overhead you pay for is in the glue code and ffi. The lack of a standard distribution system is a big one. Dynamic typing works well for small programs and teams but does not scale when either dimension is increased.
But pure Python is inherently slow because of language design. It also cannot be compiled efficiently unless you introduce constraints into the language, at which point you're tackling a subset thereof. No library can fix this.
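A rough sketch of that glue-code overhead (needs NumPy installed; timings are illustrative, not a benchmark):

    import time
    import numpy as np

    xs = np.random.rand(1_000_000)

    t0 = time.perf_counter()
    total = 0.0
    for x in xs:          # every iteration crosses the C/Python boundary
        total += x
    t1 = time.perf_counter()

    total_np = xs.sum()   # one call; the loop stays inside C
    t2 = time.perf_counter()

    print(f"python loop: {t1 - t0:.3f}s, numpy sum: {t2 - t1:.3f}s")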
A similar point was raised in the other python thread on cpython the other day, and I’m not sure I agree. For sure, it is far from trivial. However, GraalVM has shown us how it can be done for Java with generics. At a high level: take the app, compile and run it. The compilation takes care of any literal use of generics, running the app takes care of initialising classes and memory, and runtime instrumentation can be added to handle invocations of generics otherwise missed. Obviously, this takes getting a lot of details right for it to work. But it can be done.
Their main criticisms of Python were:
> it is slow, its type system is significantly harder to use than other languages, and it's hard to distribute
Your comment would have been more useful if it had discussed how FastAPI addresses these issues.
This phrasing sounds contradictory to me. The whole idea of scripts is that there's nothing to install (besides one standard interpreter). You just run them.
This notion is still strange to me. Just... incompatible with how I understand the term "script", I guess.
One of my biggest problems with python happens to be caused by the fact that a lot of freecad is written in python, and python3 writes __pycache__ directories everywhere a script executes (which means everywhere, including all over the inside of all my git repos, so I have to add __pycache__ to all the .gitignore), and the env variable that is supposed to disable that STUPID behavior has no effect because freecad is an appimage and my env variable is not propagating to the environment set up by freecad for itself.
That is me "trying to install other people's scripts"; the other people's script is just a little old thing called FreeCAD, no big.
What I don't understand is why you call it a "script".
> and python3 writes __pycache__ directories everywhere a script executes (which means everywhere, including all over the inside of all my git repos, so I have to add __pycache__ to all the .gitignore )
You're expected to do that anyway; it's part of the standard "Python project" .gitignore files offered by many sources (including GitHub).
But you mean that the repo contains plugins that FreeCAD will import? Because otherwise I can't fathom why it's executing .py files that are within your repo.
Anyway, this seems like a very tangential rant. And this is essentially the same thing as Java producing .class files; I can't say I run into a lot of people who are this bothered by it.
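For reference, CPython's documented knobs for this; none of them help when a wrapper like an AppImage launches its own interpreter with a scrubbed environment, which is the complaint above:

    import sys

    # Per-process switch; set it before the imports whose bytecode you
    # don't want cached.
    sys.dont_write_bytecode = True

    # The same switch from outside the process, either of:
    #   PYTHONDONTWRITEBYTECODE=1 python script.py
    #   python -B script.py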
it's not as portable
pipx can run Python scripts with inline script metadata. pipx is implemented in Python and packaged by Linux distributions, Free/Net/OpenBSD, Homebrew, MacPorts, and Scoop (Windows): https://repology.org/project/pipx/versions.
But a script only has one shebang.
I see it has been proposed: https://discuss.python.org/t/standardized-shebang-for-pep-72....
A lot of people seem to describe a PEP 723 use case where the recipient maybe doesn't even know what Python is (or how to check for a compatible version), but could be instructed to install uv and then copy and run the script. This idea would definitely add friction to that use case. But I think in those cases you really want to package a standalone (using PyInstaller, pex, Briefcase or any of countless other options) anyway.
You could also use shell scripting or Python or another scripting language. While Python is not great at backward compatibility, most scripts will have very few issues. Shell scripts are backward compatible, as are many other scripting languages (e.g. Tcl), and they are more likely to be preinstalled. If you are installing Go anyway, you could just install uv and use Python.
The article does say "I started this post out mostly trolling" which is part of it, but mostly the motivation would be that you have a strong preference for Go.
If you care about anyone but yourself, don't write things in python for other people to distribute, install, integrate, run, live with.
If you don't care about anyone else, enjoy python.
When you know the language well, you don't need to search for this info for basic types, because you remember them.
But that's also true for typed languages.
> The price of convenience is difficulties to scale
Of course, they never scale. The moment you start thinking about scaling, you should stop writing code as throwaway scripts but build them properly. That's not an argument to completely get rid of Python or bash. The cost of converting Python code to Go is near zero these days if there is a need to do so. Enough has been said about premature optimization.
> Anyone who's ever tried to get python working on different systems knows what a steep annoying curve it is.
If you need 10 libraries of certain versions to run a few lines of Python code, nobody calls that a script anymore. It becomes a proper project that requires proper package management, just like Go.
"You'd rather drive a compact car than an SUV? Might as well drive a motorcycle then!"
Perl is right there, requires no installation, and is on practically every unix-like under the sun. Sure it's not a good language, or performant, or easy to extend, but neither is python, so who cares. And, if anything, it's a bit more expressive and compact than python, maybe to a fault.
Specifically the problem here is automated reformatting. Gopls typically does this on save as you are editing, but it is good practice for your CI system to enforce the invariant that all merged *.go files are canonically formatted. This ensures that the user who makes a change formats it (and is blamed for that line), instead of the hapless next person to touch some other spot in that file. It also reduces merge conflicts.
But there's a second big (bigger) problem with this approach: you can't use a go.mod file in a one-off script, and that means you can't specify versions of your dependencies, which undermines the appeal to compatibility that motivated your post:
> The primary benefit of go-scripting is [...] and compatibility guarantees. While most languages aims to be backwards compatible, go has this a core feature. The "go-scripts" you write will not stop working as long as you use go version 1.*, which is perfect for a corporate environment.
> In addition to this, the compatibility guarantees makes it much easier to share "scripts". As long as the receiving end has the latest version of go, the script will run on any OS for tens of years in the future.
Get started here: https://dev.to/yawaramin/practical-ocaml-314j
That said, we can abuse the same trick for any language that treats `//` as a comment.
List of some practical(?) languages: C/C++, Java, JavaScript, Rust, Swift, Kotlin, ObjC, D, F#, GLSL/HLSL, Groovy
Personally, among those languages, GLSL sounds most interesting. A single-GLSL graphics demo is always inspiring. (Something like https://www.shadertoy.com/ )
Also, let’s not forget that we can do something similar using block comment(`/* … */`). An example in C:
/*/../usr/bin/env gcc "$0" "$@"; ./a.out; rm -vf a.out; exit; */
#include <stdio.h>
int main() { printf("Hello World!\n"); return 0; }
I think it’s uv’s equivalent, but for Swift.
(Also Swift specifically supports an actual shebang for Swift scripts.)
[1] https://github.com/xcode-actions/swift-sh
That being said...use Go for scripting. It's fantastic. If you don't need any third party libraries this approach seems really clean.
I make computers do things, but I never act like my stuff is the only stuff that makes things happen. There is a huge software stack of which my work is just the final pieces.
The term “full stack” works fine within its usual context, but when viewed more broadly, it becomes misleading and, in my opinion, problematic.
And it's okay. It doesn't mean it should be this way for everyone else.
It is pretty common (and has been for at least two decades) for web devs to differentiate like so: backend, frontend or both. This "both" part almost always is replaced by "full stack".
When people say this they just mean they do both parts of a web app and have no ill will or neglect towards systems programmers or engineers working on a power plant.
But it is already established in the industry, and fighting it is unlikely to yield any positive outcomes.
[1]: https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...
I think Java can run uncompiled text scripts now too
I think arg0 was always useful especially when developing multifunctional apps like busybox that changes its behavior depending on the name it was executed as.
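A toy of that busybox pattern (hypothetical file and applet names):

    #!/usr/bin/env python3
    # box.py -- dispatch on the name we were invoked as.
    # Try:  ln -s box.py hello && ln -s box.py goodbye && ./hello world
    import os
    import sys

    applet = os.path.basename(sys.argv[0])

    if applet == "hello":
        print("hello,", *sys.argv[1:])
    elif applet == "goodbye":
        print("goodbye,", *sys.argv[1:])
    else:
        print(f"unknown applet {applet!r}; symlink me as 'hello' or 'goodbye'")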
Additionally, it is even more powerful when used with go modules. Make every script call a single function in the shared “scripts” module and they will all be callable from anywhere symmetrically. This will ensure all scripts build even if they aren’t run all the time. It also means any script can call scripts.DeployService(…) and they don’t care what dir they are in, or who calls it. The arguments make it clear what paths/configuration is needed for each script.
no need to name your program foo.go when you could just name it foo
Something like //usr/bin/gcc -o main "$0"; ./main "$@"; exit
The main reason was to do all this without any dependencies beyond a C compiler and some POSIX standard library.
1. https://mise.jdx.dev/lang/python.html
via https://gelinjo.hashnode.dev/you-dont-need-nvm-sdkman-pyenv-...
augroup fix
  autocmd!
  autocmd BufWritePost *.go
        \ if getline(1) =~# '^// usr/bin/'
        \ |   call setline(1, substitute(getline(1), '^// ', '//', ''))
        \ |   silent! write
        \ | endif
augroup END
https://www.jbang.dev/
Example:
The shebang line can be replaced for compatibility with standard Go tooling.
Awesome!
Or the venerable https://babashka.org/
It’s great for “robust” code, not for quick things that you’re okay with exploding in the default way.
So your goal was to waste your reader's time. Thanks.
I feel like this is the unofficial Go motto, and it almost always ends up being a terrible idea.