Hacker News Highlights
If you run across a great HN comment, please tell us at hn@ycombinator.com so we can add it to this list.

I once bought a far larger supercomputer. It was roughly 1/8 of ASCI Blue Mountain: 72 racks. Commissioned in 1998 as #1 or #2 on the TOP500, officially decommissioned in 2004; I purchased my 1/8 for $7k in ~2005.

Moving 72 racks was NOT easy. After paying substantial storage fees and selling off a few of them, I rented a 1500sf warehouse, and they filled it up. Took a while to get 220V/30A service in there to run just one of them for testing purposes. Installing IRIX was 10x worse than any other OS: imagine 8 CDs, each of which you had to insert twice during the process. Luckily somebody listed a set on eBay. SGI was either already defunct or just very unfriendly to second-hand owners like myself.

The racks ran SGI Origin 2000s with CRAYlink interlinks. Sold 'em off 1-8 at a time, mainly to render farms. Toy Story had been made on similar hardware. The original NFL broadcasts with that magic yellow first down line were synthesized with similar hardware. One customer did the opening credits for a movie with one of my units.

I remember still having half of them around when Bitcoin first came out. It never occurred to me to try to mine with them, though I suspect if I'd been able to provide sufficient electrical service for the remainder, Satoshi and I would've been neck-and-neck for number of bitcoins in our respective wallets.

The whole exercise was probably worthwhile. I learned a lot, even if it does feel like seven lifetimes ago.


Ex-Google search engineer here (2019-2023). I know a lot of the veteran engineers were upset when Ben Gomes got shunted off. Probably the bigger change, from what I've heard, was losing Amit Singhal, who led Search until 2016. Amit fought against creeping complexity. There is a semi-famous internal document he wrote where he argued against the other search leads that Google should use less machine learning, or at least contain it as much as possible, so that ranking stays debuggable and understandable by human search engineers. My impression is that since he left, complexity has exploded, with every team launching as many deep learning projects as they can (just like every other large tech company has).

The problem, though, is that the older systems had obvious problems, while the newer systems have hidden bugs and conceptual issues which often don't show up in the metrics, and which compound over time as more complexity is layered on. For example: I found an off-by-one error deep in a formula from an old launch that has been reordering top results for 15% of queries since 2015. I handed it off when I left but have no idea whether anyone actually fixed it.

I wrote up all of the search bugs I was aware of in an internal document called "second page navboost", so if anyone working on search at Google reads this and needs a launch go check it out.


I would say that there is very little danger of a proof in Lean being incorrect.

There is a serious danger, which has nothing to do with bugs in Lean, which is a known problem for software verification and also applies in math: one must read the conclusions carefully to make sure that the right thing is actually proved.

I read Wilshaw's final conclusions carefully, and she did indeed prove what needed to be proved.


At Software Arts I wrote or worked on the IL interpreter for the TRS-80 Model III, the DEC Rainbow, the Vector Graphic, and the beginnings of the Apple Lisa port, as well as the IBM PC port. To put you into the state of mind at the time:

- in the pre-PC era, the microcomputer ecosystem was extremely fragmented in terms of architectures, CPUs, and OSes: 6502, Z80, 68K, Z8000, 8088; DOS, CP/M, CP/M-86, etc. Our publisher (Personal Software) wanted as much breadth of coverage as possible, as you might imagine

- one strong positive benefit of porting from 6502 assembly to IL and using an interpreter was that it enabled the core code to remain the same while leaving the complex work of paging and/or memory mapping to the interpreter, enabling access to 'extended memory' without touching or needing to re-test the core VisiCalc code. Same goes for display architectures, printer support, file system I/O, etc.

- another strong benefit was the fact that, as the author alludes to, the company was trying to transition to being more than a one hit wonder by creating a symbolic equation solver app - TK!Solver - that shared the interpreter.

Of course, the unavoidable result is that the interpreter - without modern affordances such as JIT compilation - was far less snappy than native code. We optimized the hell out of it and it wasn't unusable, but it did feel laggy.

Fast forward to when I left SoftArts and went across the street to work for my friend Jon Sachs who had just co-founded Lotus with Mitch Kapor. Mitch & Jon bet 100% that the PC would reset the ecosystem, and that the diversity of microcomputers would vanish.

Jon single-handedly wrote 1-2-3 in hand-tuned assembly language. Yes, 1-2-3 was all about creating a killer app out of 1.spreadsheet+2.graphics+3.database. That was all Mitch. But, equally, a killer aspect of 1-2-3 was SPEED. It was mind-blowing. And this was all Jon. Jon's philosophy was that there is no 'killer feature' that was more important than speed.

When things are moving fast and the industry is taking shape, you make the best decisions you can given hunches about the opportunities you spot, and the lay of the technical and market landscape at that moment. You need to make many key technical and business decisions in almost an instant, and in many ways that determines your fate.

Even in retrospect, I think the IL port was the right decision by Dan & Bob given the microcomputing ecosystem at the time. But obviously Mitch & Jon also made the right decision for their own time - just a matter of months later. All of them changed the world.


Back in 1999-2000 there was an "International RoShamBo Programming Competition" [1] where computer bots competed in the game of rock-paper-scissors. The baseline bot participant just selected its play randomly, which is a theoretically unbeatable strategy. One joke entry to the competition was carefully designed to beat the random baseline ... by reverse-engineering the state of the random number generator and then predicting with 100% accuracy what the random player would play.

Edit: the random-reversing bot was "Nostradamus" by Tim Dierks, which was declared the winner of the "supermodified" class of programs in the First International RoShamBo Programming Competition. [2]

[1] https://web.archive.org/web/20180719050311/http://webdocs.cs...

[2] https://groups.google.com/g/comp.ai.games/c/qvJqOLOg-oc
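To make the trick concrete: the sketch below (Python, purely illustrative) assumes the opponent's generator state has already been recovered (the real Nostradamus entry reverse-engineered it from the generator's outputs); after that, beating the "random" player is just predicting each draw and playing the move that beats it.

  import random

  BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
  MOVES = ("rock", "paper", "scissors")

  opponent = random.Random(42)              # the random baseline bot
  predictor = random.Random()
  predictor.setstate(opponent.getstate())   # cloned/recovered RNG state

  wins = 0
  for _ in range(1000):
      predicted = predictor.choice(MOVES)   # what the opponent is about to play
      our_move = BEATS[predicted]           # play whatever beats it
      their_move = opponent.choice(MOVES)
      wins += our_move == BEATS[their_move]
  print(wins, "/ 1000 wins")                # 1000, since the prediction is exact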


We (Nepalis) have been using this material to make lokta paper for a long time now. These papers (Nepali Kaagaz) are today used mainly for official documents.

Funny story. Before pivoting my startup to Loom, we were a user testing company named Opentest. Instead of spinning up a DB and creating a dashboard for my co-founders to look at who requested certain user tests, I just dumped everything into a Google Sheet. It was so good. No downtime. Open access. Only 3 people looking/editing, so no conflict. Didn't have to deal with database upgrades or maintenance. I often think about this decision and feel like I've learned a bunch of "good engineering practices" that pale in comparison to how being truly scrappy can be a genius unlock at any level.

My friend Christian Metts and I, 15 and 14 at the time, decided to start a virtual reality arcade in the Jackson Crossing mall in Jackson, Michigan, in 1996.

We wrote a business plan, got $10k in investment from friends and family, rented space in the mall, bought the best PC at the time, two VFX1 3d headsets, licensed Descent for our use case by snail mailing the company with our idea and receiving a contract back from them which we signed and sent with a check, and my dad helped us design, build and weld a custom desk with arms to hold the headsets when they weren’t in use. It was designed as a standing desk so you could just walk up underneath, reach up and put the headset on, and play. I think we had Nintendo style controllers for the hands.

Unfortunately we were terrible salesmen (sales boys?), and then the Quake demo came out and we just played that non-stop for 2 months without charging for it, because we'd run out of budget for licensing.

Thankfully by August we were able to get construction jobs and managed to pay off our loans a few months ahead of the original plan (I think the terms were 10% over 12 months).

Great game, great learning experience, and a lot of fun. Haven’t failed at a business venture since. Ended up being an entrepreneur for the next 14 years before taking a more standard day job as a software engineer.


Wow, that's the original release I did ages ago, with my notes and everything! I spent a few weekends extracting proprietary code/libraries from the code base so it could be released. Doing that work changed the direction of my life in a way, leading me from hobby game development work to working on Descent 3.

Very beautiful research and thorough documentation. I initially wanted to comment that this looks a lot like time-domain reflectometry on a conceptual level - but as Cindy Harnett seems to be your advisor, you probably know that already :)

Very cool! We're actually doing something quite similar (although on a much smaller scale) at the TU Berlin: https://www.tu.berlin/en/about/profile/press-releases-news/n...

Just recently, we completed a series of correction burns to match the semi-major axis of the orbit of both satellites to stop them drifting away from each other. In a few weeks/months we'll bring them back together and send a software update that will allow them to autonomously maintain formation flight.

Also, we will be doing live satellite operations at the Lange Nacht der Wissenschaften (Long Night of the Sciences) on 22 June 2024 :)


I'm at Sourcegraph (mentioned in the blog post). We obviously have to deal with massive scale, but for anyone starting out adding code search to their product, I'd recommend not starting with an index and just doing on-the-fly searching until that does not scale. It actually will scale well for longer than you think if you just need to find the first N matches (because that result buffer can be filled without needing to search everything exhaustively). Happy to chat with anyone who's building this kind of thing, including with folks at Val Town, which is awesome.
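To make the "first N matches" point concrete, here's a toy brute-force searcher (the path and match count are made up); because the match generator is lazy, the scan stops as soon as the result buffer is full:

  from itertools import islice
  from pathlib import Path

  def search_files(root, needle):
      # Brute-force scan: lazily yield (path, line number, line) for each match.
      for path in Path(root).rglob("*"):
          if not path.is_file():
              continue
          try:
              text = path.read_text(errors="ignore")
          except OSError:
              continue
          for i, line in enumerate(text.splitlines(), 1):
              if needle in line:
                  yield path, i, line

  # Asking for only the first 50 matches stops the walk early; no index needed.
  first_50 = list(islice(search_files("some-repo/", "TODO"), 50))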

Back in my aerospace days I worked on an obscure secure operating system, which, unfortunately, was built for the PDP-11 just as the PDP-11 neared end of life. This was when NSA was getting interested in computer security. NSA tried applying the same criteria to computer security they applied to safes and filing cabinets for classified documents. A red team tried to break in. If they succeeded, the vendor got a list of the problems found, and one more chance for an evaluation. On the second time around, if a break in succeeded, the product was rejected.

Vendors screamed. Loudly. Loudly enough that the evaluation process was moved out of NSA and weakened. It was outsourced to approved commercial labs, and the vendor could keep trying over and over until they passed the test or wore down the red team. Standards were weakened. There were vendor demands that the highest security levels (including verification down to the hardware level) not even be listed, because they made vendors look bad.

A few systems did pass the NSA tests, but they were obscure and mostly from minor vendors. Honeywell and Prime managed to get systems approved. (It was, for a long time, a joke that the Pentagon's MULTICS system had the budgets of all three services, isolated well enough that they couldn't see each other's budget, but the office of the Secretary of Defense could see all of them.)

What really killed this was that in 1980, DoD was the dominant buyer of computers, and by 1990, the industry was way beyond that.


Whenever this topic comes up there are always comments saying that SGI was taken by surprise by cheap hardware and if only they had seen it coming they could have prepared for it and managed it.

I was there around 97 (?) and remember everyone in the company being asked to read the book "The Innovator's Dilemma", which described exactly this situation: a high-end company being overtaken by worse but cheaper competitors that improve year by year until they take the entire market. The point being that the company was extremely aware of what was happening. It was not taken by surprise. But in spite of that, it was still unable to respond.


Earthquake waves have several propagation speeds, because there are different types of waves. The fastest is called the P-wave, which is a compressional (longitudinal) wave, similar to a sound wave, with a velocity of ~5-8 km/s for typical continental bedrock. The second fastest is the S-wave, or shear wave, which is about 65% of the P-wave speed. These waves produce relatively little displacement at the surface (except for close to the epicenter of large earthquakes) but are important seismologically. Then, there are the surface waves, which are caused by the interaction of the S-waves with the surface (in a way that I don't 100% understand). These travel about 90% of the S-wave speed, but they have the biggest displacements at the surface and therefore are the main ones that you feel and that cause damage.

The surface wave displacements also get amplified in wet or loose soil, so ground shaking and seismic damage are also much greater in areas on top of sediment rather than bedrock. Areas on a river, lake or coast where the land has been extended into the water by dumping fill dirt are the worst--ground shaking is really bad and they are very prone to liquefaction.

The difference between the arrival times (at any given point on earth) of the different phases of seismic waves is a function of the distance from the earthquake itself (the hypocenter) and the observation site. It is close to linear in Euclidean distance relatively near the earthquake hypocenter, but becomes more nonlinear farther from the earthquake, because the wave speeds are faster at depth (denser rock), so the travel paths of the wave fronts (the ray paths) are nonlinear. These differences in arrival times are one of the main ways of locating the hypocenter of an earthquake given observations from seismometers at multiple sites. It's essentially triangulation, except with time differences instead of angles--this is done through solving a system of equations.
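To make the S-minus-P idea concrete, here's a back-of-the-envelope sketch under the constant-velocity, straight-ray assumption described above (the velocities are just typical crustal values, not measured ones):

  # Distance from a single station, from the S-P arrival-time difference,
  # assuming constant velocities and straight ray paths (only reasonable
  # fairly close to the hypocenter, as noted above).
  VP = 6.0   # km/s, assumed typical crustal P-wave speed
  VS = 3.5   # km/s, assumed typical crustal S-wave speed

  def distance_from_sp_time(delta_t_seconds):
      # delta_t = d/VS - d/VP  =>  d = delta_t / (1/VS - 1/VP)
      return delta_t_seconds / (1.0 / VS - 1.0 / VP)

  print(distance_from_sp_time(10.0))   # ~84 km for a 10-second S-P delay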

Additionally, S-waves can't pass through liquids, so there is the 'S-wave shadow zone' that occupies a large fraction of the side of the earth opposite an earthquake where there are no primary S-wave arrivals--S-waves are blocked by the liquid outer core. This is how we found out that the outer core is liquid!


My dad worked at SRI for over thirty years and my mom also worked there. Money has always been an issue at SRI. You always had to be on the lookout for the next contract. If some company or part of the government wasn't paying for your work, there was always the chance that you would get laid off. On the other hand, my dad got to work on a lot of different projects over the years, from growing silicon crystals to working on holograms, laser range finders, a laser chemical weapons detector (deployed during the Iraq war), something called the Spindt cathode (which I honestly don't understand), LED printing, and many other projects. I think it was a very fun place to work, but also quite stressful. You always needed to be ready to switch to something new if the money started running out. It doesn't sound all that different from the way it is today.

The employee open house was really neat, with different labs showing off whatever they were working on, from early noise canceling tech, to computers with color screens, cell counters, you name it. I know we visited "Doug's Lab" but I have no idea what we saw there. Like any aspiring nerd, I was quite impressed that my dad and he were on a first-name basis.


Oh boy, this gives me a chance to talk about one of the gems of astronomy software which deserves to be better known: HEALPixel tessellation!

HEALPix stands for 'Hierarchical Equal Area isoLatitude Pixelization'. It is a scheme that was developed to analyze signals that cover the entire sky, but with variable density.

Like HTM or Hilbert curves, this can be used to organize spatial data.

The tessellation looks kind of funny but has many good features - it doesn't have discontinuities at the poles, and is always equal area. And with the "nested" HEALPixel formulation, pixels are identified by integers. Pixel IDs are hierarchical based on leading bits - so, for example, pixel 106 (=0110 1010) contains pixel 1709 (=0110 1010 1101). This lets you do some marvelous optimizations in queries if you structure your data appropriately. Nearest neighbor searches can be extremely quick if things are HEALPix-indexed - and so can radius searches and arbitrary polygon searches.
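For a concrete feel for the nested scheme, here's a small sketch using healpy (the Python implementation linked below); the NSIDE, coordinates, and search radius are arbitrary:

  import healpy as hp
  import numpy as np

  # The containment example above: in the NESTED scheme, a coarse pixel
  # contains a finer one iff shifting the finer ID right by 2 bits per
  # level of refinement gives the coarse ID.
  assert 1709 >> (2 * 2) == 106   # pixel 106 (two orders coarser) contains 1709

  nside = 64   # resolution parameter (a power of 2)

  # Map a sky position (RA, Dec in degrees) to its nested pixel ID.
  pix = hp.ang2pix(nside, 45.0, 30.0, nest=True, lonlat=True)

  # Radius search: all nested pixels within 1 degree of that position.
  center = hp.ang2vec(45.0, 30.0, lonlat=True)
  nearby = hp.query_disc(nside, center, np.radians(1.0), nest=True, inclusive=True)
  print(pix, len(nearby))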

HEALPixels are used today for more than just their original intent. LSST will use them for storing all-sky data and point source catalogs, for example.

More here:

- Original NASA/JPL site: https://healpix.jpl.nasa.gov/

- Popular Python implementation: https://healpy.readthedocs.io/en/latest/

- Good PDF primer: https://healpix.jpl.nasa.gov/pdf/intro.pdf

And an experimental database being built on healpix for extremely large data volumes (certainly many TB, maybe single-digit PB): https://github.com/astronomy-commons/hipscat


The minimum puzzle length for Spelling Bee is 20 words, iirc. The dictionary is also a highly curated list of “common” words. What constitutes a valid word is up to Sam, the NYT editor. It’s designed to make the puzzles doable by the average solver. You’ll notice that a lot of the words in the OP are very esoteric.

Source: helped build SB at NYT.


He thinks it's really amazingly cool! :D

I'm so happy to hear that I had some part to play in inspiring such a marvellous project.


Author here. Yes, Jeremy Howard and fast.ai was one of the inspirations for this! I'd actually be curious what he thinks of the project if he ever sees it.

> As others have noted, exponential smoothing has a different problem, that it asymptotically approaches but never quite reaches its destination. The obvious fix is to stop animating when the step gets below some threshold, but that's inelegant.

This is off the cuff, but you might be able to fix it as follows: interpret exponential smoothing as an ODE on the distance to the target. Call that distance D. Then exponential smoothing is the Euler update for dD/dt = -C*D (the constant C > 0 being a speed parameter). The issue you bring up is basically that the solutions of that ODE are D(t) = A*exp(-C*t), which is asymptotic to zero as t -> oo but never reaches it. The fix is to replace the ODE with one that goes to zero in finite time, e.g. dD/dt = -C*sqrt(D). (Solutions are half-quadratic: quadratic for a while, then zero once they hit zero.) The Euler update for this is stateless, like you wanted.
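A rough Python sketch of that fix (the constants and frame rate are arbitrary); clamping at zero is what lets the update actually terminate:

  import math

  def step_exponential(d, c, dt):
      # Euler step of dD/dt = -C*D: approaches 0 but never reaches it.
      return d - c * d * dt

  def step_finite_time(d, c, dt):
      # Euler step of dD/dt = -C*sqrt(D): reaches 0 in finite time.
      # The clamp keeps the (stateless) update from overshooting below zero.
      return max(0.0, d - c * math.sqrt(d) * dt)

  d, t = 100.0, 0.0   # distance to the animation target, elapsed time
  while d > 0.0:
      d = step_finite_time(d, c=4.0, dt=1 / 60)
      t += 1 / 60
  print(f"reached the target after {t:.2f}s")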


Huh, then you're one of today's lucky ten thousand!

Apollo 14 had a piece of loose solder in the button triggering abort-to-orbit, so it occasionally triggered itself. This wasn't a problem en route to the moon, but the second the descent phase started it would have been a Poisson-timed bomb that would prevent the landing.

There was a bit of memory that could be set to ignore the state of the abort button (this bit was the reason the abort sequence wasn't triggered en route). The problem was this ignore bit was reset by the landing sequence (to allow aborting once landing started), and they did not believe the astronauts would be quick enough to set the bit again before the button shorted out and triggered the abort.

(Ignoring the abort button was fine because an abort could be triggered in the computer instead. Takes a little longer but was determined a better option than scrapping the mission.)

Don Eyles came up with a clever hack. Setting the program state to 71 ("abort in progress") happened to both allow descent to start and prevent the abort button from being effective. So this program state was keyed in just before descent.

The drawback was that it obviously put the computer in an invalid state, so some things were not scheduled correctly, but Eyles and colleagues had figured out which ones, and the astronauts could start those processes manually.

Then once the computer was in a reasonable state again the ignore abort bit could be set and the program mode set correctly and it was as if nothing had happened.


> I ultimately tricked it by inserting a real 27C322 first and reading that before swapping over to the chip I actually wanted to read. Once the reader’s recognized at least one chip, it seems happy to stick in 27C322 mode persistently.

My people. I only aspire to be this damn clever. It's why I surround myself with people smarter than me.


Lookup tables are the only reason I was able to get this effect working at all:

https://twitter.com/zeta0134/status/1756988843851383181

The clever bit is that there are two lookup tables here: the big one stores the lighting details for a circle with a configurable radius around the player (that's one full lookup table per radius), but the second one is a pseudo-random ordering for the background rows. I only have time to actually update 1/20th of the screen each time the torchlight routine is called, but by randomizing the order a little bit, I can give it a sort of "soft edge" and hide the raster scan that you'd otherwise see. I use a table because the random order is a grab bag (to ensure rows aren't starved of updates) and that bit is too slow to calculate in real time.
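The real thing runs as 6502 assembly on the NES; what follows is only a conceptual Python rendering of the two-table idea, with made-up sizes:

  import random

  ROWS = 30     # background rows on screen (illustrative)
  SLICES = 20   # the routine touches roughly 1/20 of the screen per call

  # Table 1: for each torch radius, which offsets around the player are lit.
  def build_light_mask(radius):
      return [(dx, dy)
              for dx in range(-radius, radius + 1)
              for dy in range(-radius, radius + 1)
              if dx * dx + dy * dy <= radius * radius]

  LIGHT_MASKS = {r: build_light_mask(r) for r in range(1, 8)}

  # Table 2: a fixed pseudo-random "grab bag" ordering of the rows,
  # precomputed once so no row starves and nothing is shuffled at runtime.
  ROW_ORDER = list(range(ROWS))
  random.Random(1234).shuffle(ROW_ORDER)

  cursor = 0
  def torchlight_step(update_row):
      # Refresh the next slice of rows, in the precomputed random order.
      global cursor
      for _ in range(max(1, ROWS // SLICES)):
          update_row(ROW_ORDER[cursor])
          cursor = (cursor + 1) % ROWS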


I recall a project back in the day where the customer wanted to upgrade their workstations but also save money, so we designed a solution where they'd have a beefy NT4-based Citrix server and reuse their 486 desktop machines by running the RDP client on Windows 3.11.

To make deployment easy and smooth, it was decided to use network booting and run Windows from a RAM disk.

The machines had 8MB of memory and it was found we needed 4MB for Windows to be happy, so we had a 4MB RAM disk to squeeze everything into. A colleague spent a lot of time slimming Windows down, but got stuck on the printer drivers. They had 4-5 different HP printers which required different drivers, and including all of them took way too much space.

He came to me and asked if I had some solution, and after some back and forth we found that we could reliably detect which printer was connected by scanning for various strings in the BIOS memory area. While not hired as a programmer, I had several years' experience by then, so I whipped up a tiny executable which scanned the BIOS for a given string. He then used that in the autoexec.bat file to selectively copy the correct printer driver.
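The original was a tiny real-mode DOS executable; the Python below only sketches the string-scanning idea over a saved dump of the BIOS region, and the printer strings are made-up examples:

  # Hypothetical printer-identification strings to look for in the dump.
  PRINTER_STRINGS = [b"LaserJet 4", b"DeskJet 500", b"LaserJet IIIP"]

  def detect_printer(bios_dump_path="bios_f000.bin"):
      # Scan a saved dump of the BIOS memory region for a known string.
      data = open(bios_dump_path, "rb").read()
      for s in PRINTER_STRINGS:
          if s in data:
              return s.decode()
      return None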

Project rolled out to thousands of users across several hundred locations (one server per location) without a hitch, and worked quite well from what I recall.


The projection system, you can see in a photo, was made by Vitarama. Vitarama developed a number of interesting projection systems, but their best known was Cinerama, the three-camera widescreen format that was briefly popular in the '50s but had long-lasting influence by popularizing widescreen films. One wonders if Fred Waller, Cinerama's inventor, worked on this project. He was sort of an inventor of the classic type.

In another photo we see what I think is a Teletype Model 15, behind the clerk handling an impressive rolodex. It even appears to have a Bell Canada Trans-Canada Telephone System badge on it. Transmitting orders was a very popular application of teletype service, and a lot of Bell advertising in both the US and Canada focused on customers like Hudson's Bay and Montgomery Ward.


When I was at FastMail I did a lot of very manual work to not just block spammers and other abusers, but to make their life as difficult as possible. That included figuring out how to notify the people running the servers they used (including sometimes finding the IRC chat for the folks on that server and telling them they had an intruder). One of my favorite things was to redirect bounce messages that were targeted at innocent FastMail customers to the actual spammer's email address -- which I found stopped the spam from them very quickly, once their inbox filled up with thousands of bounce messages!

Personally, I think it's reasonable to care about such things, and to try to do something about it. If no-one cares or tries, then sucky people will just suck even more.


Around 2003 I did the art direction (mostly pixel-pushing...) for a game that shipped on a Nokia model. I have no recollection of what the phone looked like, but it was part of the "lifestyle" category described in this article. It wasn't one of the craziest form factors, just a candybar phone in pretty plastic with one of those early square color screens.

Nokia Design sent a massive moodboard PDF, something like 100 pages, with endless visual ideas for what seemed practically like an Autumn/Winter lineup of plastic gadgets. But it was all about the moods. The actual phone's usability and software were a complete afterthought. Those were to be plugged in eventually by lowly engineers somewhere along the line, using whatever hardware and software combination would happen to fit the bill of materials for this lifestyle object.

The game I designed was a "New York in Autumn" themed pinball. There were pictures of cappuccino, a couple walking in the park, and all the other clichés. It fit the moodboard exactly, the game shipped on the device, everyone was happy. Nobody at Nokia seemed to care about the actual game though.

Of course the implication with these fashion devices was that they were almost disposable, and you'd buy a new one for the next season. This would be great for Nokia's business. Unfortunately their design department seemed consumed by becoming a fashion brand and forgot that they're still a technology company. Everyone knows what happened next.


I once commented that HN is the most wonderfully diverse ecosystem and here's my chance to prove myself right! I'm a cork 'farmer' in Coruche, right where this article is situated. I wasn't expecting to read a puff piece about it today. I just did my novennial harvest last year. For anyone not in the know, cork is the cork trees' bark, and it's stripped from the tree without harming it every nine years. Undressing the tree is properly medieval work and you need to be very skilled with a hatchet to do it. Do a poor job and you'll ruin the cork and scar the tree for decades.

The harvest is tough work but it's the only well-paid trade left in agriculture. I doubt it has much future beyond fodder for high peasant magazine articles. Trees are dying left and right from multiple climate-related problems no one has a handle on. Divestment from the traditional montado like mine into intensive production units with better water management and automated extraction is the likely future. The billion-dollar outfits have started experiments with high-density groves, inspired by the olive oil industry's success. It's a finicky tree though, so conclusive results are taking a few decades more than you'd expect to materialise. They're stuck having to buy cork from thousands of traditionalist family farms for now.

But that's assuming the industry even grows enough to justify the investment into better plantations. Legitimate uses for the stuff apart from wine corks are scarce. We're all hoping that our phenomenal ecological footprint will see us grow as an industry into everything from insulation and roofing to shopping bags and umbrellas (hence said puff piece I imagine). We'll see, it really is a phenomenal material and the carbon math makes sense at the source. You can almost see the tree sucking out stuff from the air and soil to build thicker layers of bark. I joke that we've been doing regenerative farming for generations, we just didn't know it until someone told us.

If anyone on HN is ever in Portugal and wants to visit a montado, happy to take y'all on the most boring tour of your life. But we can have a nice picnic! It's lovely country.


Back when my sole internet experience was playing (losing) every match on Chess.com as a "volunteer librarian", I'd often inject awkwardly escaped characters, closing tags, common quirky control strings, and even OLE objects into the live Chess.com games.

Eric (the founder) had politely asked me for a more formal audit (which I declined, not wanting to out myself as an 11-year-old script kiddie), but I did explain the RegExp needed for the chat room censor, and we tackled the ultimate problem: how to detect cheaters in asynchronous environments.

After consideration I informed him that the only way to detect cheaters was to compare every (game-significant/high-mu) move made against the known optimal moves from engines, and use statistical inference to discriminate good humans from cheaters.
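A minimal sketch of that statistical idea (nothing like Chess.com's actual system, and the 0.55 baseline is invented): given the player's moves and an engine's preferred moves in the same positions, test whether the match rate is implausibly high for a human:

  import math

  def engine_match_zscore(player_moves, engine_moves, human_rate=0.55):
      # How many standard deviations above the assumed human baseline
      # this player's engine-match rate sits; large positive => suspicious.
      n = len(player_moves)
      matches = sum(p == e for p, e in zip(player_moves, engine_moves))
      expected = n * human_rate
      stddev = math.sqrt(n * human_rate * (1 - human_rate))
      return (matches - expected) / stddev

  # 48 engine-matching moves out of 60 is ~3.9 sigma above a 0.55 baseline.
  print(engine_match_zscore(["e4"] * 48 + ["a3"] * 12, ["e4"] * 60))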

Of course, at the time, this was laughably infeasible - which was the conclusion we settled on. But for a kid barely out of elementary school to discuss those kinds of nuances with a legit webmaster (Hello Eric!), it remains one of my fonder internet memories.

