@gandalf sent a private message
@gandalf in #sanfrancisco
Re: %n/Ij20l1f

Yes, I agree. Oakland has become a great deal more expensive.

It's kind of a one-way curse. The hardest part is moving in and getting stable. Lots of folks do one, but not the other. All the numbers are scaled up from Standard America, and that makes for a tall hurdle, a bit of a Las Vegas bet. But moving out from a stable base can mean a substantial "upgrade", at least in material terms, if you have the discipline to save at a Bay Area clip and "scale down" to saner surrounds. There's a quality of life here like nowhere else, but not everyone values that particular quality as highly as its cost.

The same pattern holds for the Bay Area generally. Many of my SF-local friends move from SF to Oakland, or from Oakland to El Cerrito etc., to move from apartment to house, or lease to mortgage. Some are waiting for the market to turn around, biding time in East Bay until they can afford to buy, back where they grew up. Waiting for the earth to shake and scare away newcomers, or at least cash buyers.

Some things don't change, even as the rest does. The big Bay cities are still hubs. Air travel to and from is easy near the airports and transit lines. Events are still dense in all the usual places, and BART access is worth a lot, day to day. Structural advantages.

But the jagged, ragged edge of Oakland is starting to roll over. The "art walks" happen nearer the buyers than the sellers these days. Sure, Oakland still has grit, danger, and chaos. It's dysfunctional, as opposed to SF-smug. That's true as ever in relative terms, looking west to SF, but less so in absolute terms.

In the end, you have to make the best of it, no matter where you are. But the Bay Area is relatively high risk for establishing that base level of stability, from which you can build. Moving in is taking a seat at a gambling table. Moving out is more like cashing in.

@gandalf sent a private message
Re: %yT6UzfLuK


@gandalf in #scuttlebutt
Re: %QqIkHXYur

Thank you both for responding!

@gandalf voted [@kemitchell](@TqlPr31FOOCjuupH0L9HCJdLjWYySyibbKscNhgLoCo=.ed25519) you ar
@gandalf voted @kemitchell Yes, that's correct (as far as I understand it) [Here's one of
@gandalf in #scuttlebutt

Feed Forks?

Do I correctly understand that Secure Scuttlebutt clients will treat two different signed messages, both linking back to the same prior message, as corrupting the feed from that point forward? Do peers simply reject new messages that link back to messages not currently at the head of the feed?

Put in different ways, starting from consequences and working back:

  • Is it incumbent on peers to make sure that they faultlessly serialize their messages, so the feed only extends, never branches? I take it this practically locks write use of private keys to one process on one device, since copying a key to another device running in parallel would risk forking the feed.

  • If a peer retains its private key, but loses its feed cache, does it essentially have to guess when it has fully replicated its old messages, so that it can safely start appending again?

Many thanks!
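
To make the scenario concrete, here's a sketch of the fork condition I mean, assuming a simplified message shape with a `previous` field holding the hash of the prior message (not SSB's actual wire format):

```javascript
// Scan messages in arrival order. Return the first message whose
// `previous` link points at a parent some earlier message already
// extended -- the branch point a replicating peer would reject.
function findFork (messages) {
  var extended = new Set()
  for (var i = 0; i < messages.length; i++) {
    var message = messages[i]
    if (extended.has(message.previous)) return message
    extended.add(message.previous)
  }
  return null // linear feed, no fork
}
```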

@gandalf subscribed to channel #motorcycle
@gandalf in #travel

Let's get food while you're in town!

@gandalf in #scuttlebutt
Re: %wOsBVN1yW

@Soggypretzels Thanks for the link

@gandalf voted I am not really sure what the status of this is, but it is something that i
@gandalf in #flume
Re: %+1NlsorzV

Thanks for linking!

@gandalf voted standards people are working on a new proposal for block storage: * https:/
@gandalf in #scuttlebutt

Multiple Devices

Please forgive me if I've missed this somewhere in the doc!

What's the currently preferred approach to Scuttlebot on multiple devices? Is there a standard message type I can use to associate one key with another?

@gandalf in #cooperative
Re: %ZdkD1w8sh

@mix: I'm predisposed, by long experience with folk less competent and well-meaning, to taste bile at every oblique allusion to methodology primed to fit on a postcard.

As a consequence, I give you less credit than you deserve. I'm working on it.

@gandalf in #cooperative
Re: %ZdkD1w8sh

@substack: "Standard" carries way too much weight in our vocabulary. I should know. You'd search in vain, far and wide, for someone so guilty as I of abusing that quirk ;-P

With projects like Switchmode, I'm trying to give folks structure where enthusiasm currently suffocates in uncertainty. I hope that will create opportunities we'd otherwise lose to opportunity cost. So you're a company that doesn't hire open-source types, because you don't know how to go about it, and they can't tell a contract from a greeting card? Here's a form, ready to use, from a constructive, world-weary, neutral source. The toll across the bridge of uncertainty is paid for you. I paid it. Merry Christmas.

That's very different from telling a company that they will accept X terms, from a source defined by organized opposition to their interests, or face dire consequences. That's no longer a docent's welcome. It's a negotiator's gambit. If you haven't the leverage to keep them dealing at your table---if you can't stop them walking away, and hiring someone who doesn't demand your terms---it's impotent posturing. If you do have the leverage, how they're written, managed, revised, disseminated --- it's all rather beside the point. Anything not totally unmanageable will do. It's simple cost-benefit.

@gandalf in #cooperative
Re: %ZdkD1w8sh

Theorizing this, or getting lost in implementation details---how you structure, legally, or what software you use for contracts---is pointless. Either you'll bring together sufficient people willing to risk their livelihoods, with enough momentum to pull even the indifferent into collective bargaining, or you won't. In other words, you develop leverage, or you don't.

The best-crafted back office structure won't change the leverage. Conversely, if you have the leverage, you can do all the contracts with crayons and construction paper. You can do all the meetings by string-and-can telephone.

@gandalf followed @christianbundy
@gandalf in #npm-ssb
Re: %GN18zeGUp

My experience with SemVer at repository scale comes from work on https://www.npmjs.com/package/flat-dependency-follower. The goal there was to precompute fully resolved, composable, flat dependency trees for every package and version in the public registry.

It is very difficult to wrap one's mind around the enormity of package dependency trees. Even for packages in the public registry, which tend to be libraries with fewer dependencies than fully composed applications. Some packages are remarkably stable, with few new updates over time. But the cascade of a single patch update can be absolutely enormous.

As a package consumer, I've seen that even services like Greenkeeper, when configured only to notify when a newer version out of your specified range becomes available, can quickly overwhelm a large project with pull requests. Triggering CI mitigates the inconvenience. If you're confident in your test suite, you can be much more cavalier about taking new versions, whether they're within the range you specify or outside of it. In practice, I usually create a Git branch, update the tree, and run my test suite whenever I consider a new version. If that fails, I usually pull down the Git repo for the dependency and run git log from the old to the new release tag. That's a few minutes' to an hour's worth of work.

In the vast majority of cases, patch-level releases of npm packages have worked out fine for me. I can probably count on one hand the number of times a >=1.0.0 patch release has bitten me with unintended breakage. Given how often patch releases affect my trees, I believe the "error rate"---the number of upgrades within the specified range that caused breakage---has been very, very small. I wouldn't trade the time I've saved for the comfort of manual verification. Especially when the dependencies being upgraded are dependencies that I wrote and published!
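
For intuition, the in-range check for a caret dependency can be sketched like this. It's a simplification that only covers versions at or above 1.0.0 and ignores prerelease tags:

```javascript
// Does `version` satisfy ^base, under npm's rule for versions >= 1.0.0?
// Same major, and no lower than base. (Caret ranges below 1.0.0 and
// prerelease tags behave differently; this sketch ignores both.)
function satisfiesCaret (version, base) {
  var v = version.split('.').map(Number)
  var b = base.split('.').map(Number)
  if (v[0] !== b[0]) return false
  if (v[1] !== b[1]) return v[1] > b[1]
  return v[2] >= b[2]
}
```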

@gandalf followed @andrestaltz
@gandalf followed @Feross
@gandalf connected to a pub
@gandalf followed @sbot.kemitchell.com
Re: %RRqjIzyKS

I am by no means a cultivated Sinologist. But I've read a bit about the period, and I recall many anecdotes about corruption, cheating, and unequal access in the path to civil service. Literacy was not universal to begin with, and the exams were grueling, necessitating a long course of private study, much of it rote. Plus access to the voluminous materials in the first place, and supplies for fancy writing and such, all privileges of the wealthy at the time.

In other words, much like law school and the bar exams to become an American lawyer these days. Capable of producing a reliable, relatively small crop of celebrated up-from-bootstraps stories, but in the main, an amplifier of preexisting inequality.

I also recall reading that the exams were both oddly specific---a staple was quoting, verbatim, passages of standard texts---and highly general, in the sense that testing focused almost exclusively on literary, philosophical, and other "liberal" subject areas, to the exclusion of science and other more practical areas.

If you ever get a chance, there's a lovely Chinese garden in Portland. One of the buildings is a study set up for a student preparing for the exam. A very special place.

I'd so love to visit the gardens in the PRC someday. My old man has been, and the best here in North America only manage to remind him of those visits.

@gandalf in #webassembly
Re: %0vtC1GiAc

@dominic, any assessment of WebAssembly as a "compile" target? It's on my list to look through the doc with that in mind.

@gandalf followed @regular
@gandalf in #programming
Re: %X/qO9+cfe

@mixmix: About the Microsoft Word pit. Not all bad news! Common Form can export:

npm i -g commonform-cli
commonform render --format docx --title "Some Agreement" --number outline --blanks blanks.json --signatures signatures.json some-agreement.cform > some-agreement.docx
@gandalf in #programming
Re: %X/qO9+cfe

@regular, I've used Common Form to make contracts destined for many jurisdictions. That being said, there are a couple of limitations:

  1. Right now, valid Common Form data objects contain only a subset of ASCII characters. No discrimination intended---I love Russian, Spanish, and Unicode punctuation for English texts---but restricting characters is one way to reduce the number of ways to encode the same content. In principle, it wouldn't take much to adapt the schema to a different language, with a different set of allowed characters. But the infrastructure I've set up around commonform.org enforces the original, English-centric schema.

  2. All of Common Form's concepts and features are "low-level", in the sense that they don't try to process or interpret the language in forms, just their structure. Whether legal language works in a particular jurisdiction definitely involves interpreting the language.

The good news is that having a "low-level" data language to encode and identify forms---by hash!---makes layering all kinds of data on top straightforward, including people's interpretations, thoughts, and reactions. I've taken to calling metadata that points at particular Common Form objects "annotations". Programs that write annotations I call "annotators". But people can write annotations, too!

commonform.org already supports freeform text comments from named users, keyed to particular forms by hash, as a kind of annotation. I am very interested in adding UI support for well-structured annotations, as well. There are many jurisdictions within the US---each state has its own laws, much as the EU member states retain national laws in specific areas. I'd eventually like to implement structured annotations that communicate, say, "Kyle checked this bit of language and thinks it works under California law as of July 6, 2017." or "Juan checked this language and thinks it cannot be enforced under French law as of July 7, 2017." I can write an unstructured comment like that and save it to commonform.org today. If there were a standard, structured way to express it, we could search on it.
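
As a sketch of what I mean by a standard, structured shape (the field names here are hypothetical, not an existing commonform.org schema):

```javascript
// Hypothetical structured annotations, keyed to forms by content hash.
var annotations = [
  { form: 'hashOfForm', jurisdiction: 'US-CA', conclusion: 'works',
    reviewer: 'kyle', asOf: '2017-07-06' },
  { form: 'hashOfForm', jurisdiction: 'FR', conclusion: 'unenforceable',
    reviewer: 'juan', asOf: '2017-07-07' }
]

// With a shared shape, "who has reviewed this form for this
// jurisdiction?" becomes a simple filter.
function search (annotations, form, jurisdiction) {
  return annotations.filter(function (annotation) {
    return annotation.form === form &&
      annotation.jurisdiction === jurisdiction
  })
}
```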

@gandalf in #programming
Re: %X/qO9+cfe

@mix: I've used the tools to prepare many, many contracts at this point.

The good news is that it makes it much, much easier to come up with good docs.

The bad news is that almost whenever another lawyer is involved in a negotiation, I end up getting dragged back into the Microsoft Word Pit.

@gandalf in #cooperative
Re: %T9vafeEAA

Sorry. This is about contracts between coops? So Coop A gets a deal with Client X. Then Coop A turns around and subcontracts X work to Coop B?

@gandalf in #programming

Common Form

I'd like to share a project that I've been working on for a few years now: Common Form. Common Form implements a data language, editor, and repository for modular legal contracts. It's a mishmash of bits and pieces of S-expressions, CodeMirror, jslint, npm, and Git, all reinterpreted for the contract drafter's art.

With Common Form tools, mostly but not exclusively by yours truly, it's currently possible to:

  1. compose legal contracts from reusable parts, with automatic structural checks and consistent, automatic formatting
  2. post forms online anonymously, creating an obscure link to send to colleagues or counterparties
  3. publish posted forms by assigning them a project name and edition, akin to tagging a Git commit for release
  4. browse forms on the web, fill them out, and download to regrettably standard Word format, without sending deal specifics or info about who is signing across the wire
  5. post comments to forms on the web, and subscribe to notifications
  6. automate document generation tasks, with UNIX-y, plain-text tools, to a number of more civilized formats

The crux of Common Form is a schema for legal contracts, and reusable chunks of legal contracts, as JSON objects. The validation rules are implemented here. The rules try to:

  1. leave just one correct way to encode any particular bit of contract language, to facilitate content hashing
  2. clearly encode structural elements of contracts, like definitions, references, blanks, headings, and nested structures
  3. make it fairly easy to encode forms in current use, with minimal adaptation

The result is a very simple, strict, recursive schema. If this project has reinforced any general-sounding wisdom, it's that teeny-tiny looking schema choices have huge consequences, and that nothing recursive is ever small!

By analogy, Common Form's data model is a bit like Git's, but more strict. Git content-addresses several kinds of object: unstructured blobs, structured trees, plus commits and tags. Common Form hash-addresses only one kind of data, schema-compliant form objects, and uses SHA-256 instead of SHA-1.

As an example, here is a draft form contract:


commonform.org will serve a single-page application that resolves the triplet of publisher name kemitchell, project name switchmode, and version-like "edition" 1e2d to the Merkle root of the full tree. It then GETs the tree from the server and renders. The front-end stack is mostly pieces of choo.

If you click the § symbol next to any part of the form, you'll focus it, revealing the hash of the sub-form. If you click the text of any sub-form and change it, you'll see the app rehash. It's a live structure editor for a Merkle tree in the browser, saving edited states in local storage as you go.

By default, commonform.org enables a number of "annotators", akin to code linters, that scan form content and call out structural errors, like broken cross-references, and stylistic faults, like "hereinbefore" and other blacklisted words. These are noted with flag symbols in the right margin. Clicking the flag focuses the form, revealing the annotations in context. Just as I've been using jslint/eslint/standard and friends for several years now, and still get scolded by them all the time, I've been writing contracts with annotators for several years, and couldn't imagine working without them now.

The server software computes views of an underlying append-only, TCP, pull-protocol log store, tcp-log-store. The server API addresses a few confidentiality challenges:

Saving a form to the server merely inserts it into the big, common SHA-256 keyspace of known forms. It doesn't associate any metadata with the form. Clients can GET specific trees by hash, but can't request an index of existing hashes. This makes it possible to post a form to the web, and receive a canonical URL for it, without sharing it with the broader universe. If the form properly employs fill-in-the-blanks where information about parties and specific deal terms belongs, it becomes possible to save drafts of negotiated agreements online. The software automates combining private details with public, generic forms, to make complete documents.

Publishing a form associates all kinds of metadata with it---publisher, project name, edition, and so on. This makes it easy to refer to a project:



The server lists all publications, and indexes them for various searches, like "defines such-and-such term" or "contains such-and-such other form". When viewing a form on commonform.org, the interface will tell you who has published the form under what name. This makes it possible to recognize work you've seen before, drop it into new forms by reference, and recall conclusions about forms you've seen before, without rereading them.

I've used Common Form tools in my law practice for quite some time now---the two grew up together---and couldn't do what I do, or enough of it to last, without them.

That being said, I've struggled with one very important part of the project. The plan was always a web interface that other lawyers might actually use---that's what dragged me into this Node.js mess in the first place! But the temptation has always been to focus on the command-line tools, which I would personally prefer to use, even if the web interface were brilliant. Which it isn't.

I've been pretty heads-down on this project, not to mention law practice, for a few years now. I'm at the point where it's time to start talking about the project, and offering it out as something other folks can use and help improve.

If you have thoughts, I'd love to hear them! Questions are also very, very welcome, especially if you think they may be "obvious" or "basic".

I'd also love pointers to events where a talk about Common Form might be welcome. I've mostly avoided the conference circuit these past few years, since leaving Austin, Texas---a land perennially ravaged by out-of-towner conferencegoers---but I think that should be part of my life again.

@gandalf in #SWARM
Re: %Ofx95lEz7

Nobody doing blockchain stuff gets to make fun of my lawyerly gobbledygook ever again.

What do I know, right? They project all the Values.

It sounds like bullshit. A swarm of flies circles above, incessantly buzzing buzzwords.


Their "white paper" says "DRAFT, NOT FOR PUBLIC CIRCULATION" on the first page. It's on Scribd. I'm pretty sure I've seen a more recent draft elsewhere. It's a conference report on token fundraising.

Their marketing page calls tokens "shares" and "equity" and "crowdfunding". They didn't read the whitepaper.

Under "decentralized":

Instead of a traditional business structure (like corporations or non-profits), projects on the Swarm platform are "Distributed Collaborative Organizations" (DCOs) where all supporters are entitled to equal sharing opportunities. This is democracy for business of the future.

The most "traditional" business structure is the partnership. By default, in partnerships, partners share equally in profits and losses, and have full authority to take action for the partnership. In most American states, you create a partnership just by carrying on a business for profit with another person. You don't have to file with the government. You don't need a computer.

They don't even have the basics. Corporations and non-profits are not exclusive categories. In the United States, the vast majority of non-profits I've seen and worked with are corporations.

So many of these. I've seen so, so many of these ICO gold rush marketing pages. Why do they still hurt?

It's part of my job to know the detailed legal rules, some of which are very niche, and hard to decipher. It's very natural for some of us to specialize there, so others can focus elsewhere. But nobody needs super-lawyer secrets or years of formal education to spot basic misses. An open mind and fifteen minutes on Wikipedia would do. That's more homework than these people seem to have done.

Kind of ranting here. Absolutely none of it's pointed at you, @cel! I appreciate you posting, even though it pains me.

@gandalf in #ssbpm
Re: %f9xdqPtVR

yarn's and npm@5's lockfiles are not strictly equivalent. IIRC, package-lock.json describes a node_modules tree that satisfies package.json, while yarn.lock describes a resolution of all dependencies, but leaves construction of the tree up to the particular yarn implementation.
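
To illustrate the structural difference, with entries abridged and purely hypothetical (not copied from real lockfiles):

```
# package-lock.json: keys mirror the node_modules tree npm will build.
"dependencies": {
  "some-package": {
    "version": "1.3.0",
    "dependencies": { ... }
  }
}

# yarn.lock: keys are the range specifiers from package.json; how the
# resolved versions land on disk is left to the yarn implementation.
some-package@^1.1.0:
  version "1.3.0"
```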



@gandalf in #programming
Re: %PvS1qTENE

@mix, thanks for the feedback! I've definitely reverted to a simple arrow on the run button. That was the original plan, and with a few testers, it seems to make the operating principle clearer.

The string-var distinction is a very, very good observation. Hopefully I can get most folks thinking that way with very careful choice of variable names and string values. Oddly enough, I'm leaning toward the most math-like variable names possible, single-alpha style. I also want to keep the number of provided string values down to a minimum. Hopefully just a few common animal names or somesuch.

@gandalf in #programming
Re: %PvS1qTENE

@regular, thanks for your message.

The dependency-graph concept is really easy to see from a programming background. It definitely figured into how I've approached mapping out the order of topics in the curriculum, which I think I've got sorted now.

Then again, I think it's hard-to-impossible to know what you don't know that other people won't know ... if you catch my drift. Some analogies and connections that are very clear to me, like string[index] to array[index], don't jump out to many very bright learners. I seriously wonder whether coming up with graphs and coding quick scripts to process and check them isn't doing more to help mask my insecurity in my teaching ability than helping any hypothetical learners.

As for P2P learning, well, I've seen many good examples of "peer learning" --- learners helping and teaching other learners. freecodecamp.com is a great example. But I've also seen how setup and system-software requirements---especially anything requiring installation---take a toll in lost and foregone learners. Even the great online teaching tools tend to be pretty unusable on mobile devices, for example. I'd think making that jump would do far more for learners than changing implementation details like P2P versus client-server.

I've got a lot of challenges to write. I'm looking forward to getting those done, so I can start inviting learners in earnest. Then I think my plan is watch, watch, watch, and listen.

@gandalf in #programming

Teaching JavaScript, Without Teaching

Too Long; Demo Please: https://js.kemitchell.com

Source: https://github.com/kemitchell/js.kemitchell.com

The demo isn't complete. Mostly, it needs more challenges. But I'm hoping the interface---and the presentation---are mostly done. I hope it will remain just that simple. An editor with code, some of which you can't edit, the rest of which you can. A button that runs the code. A readout comparing the output to a target for the exercise. Make changes in the editor and click the button until you get the required output. Proceed.

I've done NodeSchool sessions in Oakland and San Francisco for quite a while now. I've met a lot of inspiring learners, especially folks completely new to programming. I wish I could still see all this as they do.

I've also seen enough people stuck to know that resources like javascripting set up a great many potential stumbling blocks. As do other resources, like Eloquent JavaScript or freeCodeCamp's JS curriculum. I'm so thankful for the folks behind those resources, many of whom I've had a chance to help directly, or support in other ways. Often, folks end up starting with a resource that definitely isn't meant for them, such as a tutorial for folks who are already programming in some other language.

I'm experimenting with a new-programmer learning tool on a few premises:

  1. Browser or bust.

    Avoid starting by installing Node.js. Eventually folks should do that, but while folks are feeling motivated, we should get them chipping away at the fundamental concepts as soon as possible, while the iron is as hot as possible.

  2. No explanation. No introductions. Just code. English in some variable names and strings, but short, abstract names wherever practical.

    This approach is partly inspired by non-native speakers of English I've seen working through various curricula. It's much easier to read English than to speak it, but many of these learners have developed the habit of skipping the prose right to the code and exercises.

    Prose can be helpful, but for many learners, it's just more opportunity to stumble on an unfamiliar word or concept. For example, some of the writing in Eloquent JavaScript is, frankly, beautiful. But I strongly suspect it's more fun for folks who already know how to program, considering materials for others, than the intended audience. I actually bought a copy of Eloquent JavaScript about a week ago, got right loose on the old alky-hol, and dove in. I was able to pick out a few post-its worth of jargon that I sure wouldn't have understood as a beginner, well ahead of any code examples or exercises.

  3. Ruthlessly manage how many new concepts learners see in action at once.

    The data files for the challenges have tags for the concepts shown in provided code, as well as those needed to make the changes to pass. That makes it easy to show hints, as well as run automatic reports showing what gets introduced when.

  4. Show a concept in use as many times as practical before expecting learners to type out a use of it themselves.

    I think the pace can accelerate, depending on the material. So I'm not too worried about a - b immediately after showing that a + b is a thing. But I'll want to give a much longer "running start" for presumably novel concepts like return values, arguments, closures, and so on.

  5. Cull JavaScript down to the absolute minimum needed to become self-sufficient.

    This has been pretty fun, as a relatively old hand. A few examples: while as the only looping construct, with no need for for or do ... while. Just > is fine; you can reverse operand order instead of using <. Just object[key]. They can figure out object.key in a split second when the time comes.
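
A sketch of a challenge written entirely in that culled subset; the names and values are mine, not from the actual curriculum:

```javascript
// Only the culled subset: `while` for looping, `>` with reversed
// operands instead of `<`, and `object[key]` instead of `object.key`.
// (console.log is the one piece of tooling outside the subset.)
var sounds = { cat: 'meow', dog: 'woof' }
var names = ['cat', 'dog']
var i = 0
while (2 > i) {
  console.log(sounds[names[i]]) // prints 'meow', then 'woof'
  i = i + 1
}
```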

The hardest questions so far have been about when to introduce what --- in what order. true and false before or after if, else if, and else? How much of String before moving on to Number or true and false or even Array or Object?

I'd love to learn from others' teaching experience!

@gandalf dug BioBricks Foundation just announced that they will be making 10 megabases o in #biohacking
Re: %/gLJ98YIh

@myf: Can't help mentioning. The Poker keyboards are excellent. I've had one with clear switches---the heavy, tactile, click-free ones---for years. Put some flatter blank keycaps on it years ago, and have pounded on it daily since. Common Form, the database stuff, the licensing packages --- I've done all the best coding of my life on that board.

Heavy for constant foot travel, though. Filco makes a nice mini in sturdy plastic, with Bluetooth, too.

@gandalf in #opensource
Re: %uOLTP66Vx

I can't name names or share details, but I'm seeing good take-up on big parts of this form. Mostly in negotiations on older versions or variants of the public form, from before public release. The biggest feature to test is almost certainly the open release process.

@gandalf in #git-ssb
Re: %aUFiyLv/a

I'd take patches on thumb drive via carrier pigeon from almost everyone I know who's on ssb. ;-P

@cel, I'm almost sure you're thinking about ssb stuff, not my rando repos. But it's a chance to send you a ++ anyway.

@gandalf dug In Singapore at NUS for [SB7](http://sb7.info/). Any scuttlers want to meet in #traveling
@gandalf pushed to %HQcMW5sNo...
@gandalf in #opensource
Re: %uOLTP66Vx

@mixmix, thanks!

The README could definitely use revisiting for more "pop", when I get the chance. For now, I've moved the "New and Novel" section, which contains the two bits you highlighted, up above the notes on structure.

The structure is pretty typical, at least for those already using some kind of decent form contract. The new bits are more important. They should go first.

@gandalf dug Nice work, I like the idea of this. From a lazy developer perspective, the in #opensource
@gandalf pushed to %HQcMW5sNo...
@gandalf pushed to %e2faoJVYt...
@gandalf pushed to %HQcMW5sNo...
  "type": "git-repo",
  "name": "law-form-license"
@gandalf dug All the battlemesh talks are being streamed [here](https://www.youtube.com/ in #mesh
@gandalf in #scuttlebutt
Re: %rBZQB3gcv

dev.kemitchell.com upgraded. So far, so good.

@gandalf followed @Mikey
@gandalf in #opensource

Switchmode Developer Agreement

I've finally released a first draft of a long-running form contract project, the Switchmode Developer Agreement, for open-source developers doing a mix of open and closed client work. In various places:




The form uses a plain-English style, to make it as easy as possible for non-lawyer folk to understand and help improve. Of course, I'm also very interested in convincing company people to run it by their contracts lawyers. I've put a lot of thought into this, but it's still entirely the work of one hand and one mind.

I will probably do a blog post fairly soon, but the README gives a good capsule summary of the major features, structural, legal, and practical. In the meantime, I'd be very grateful for feedback and reactions.

It doesn't feel quite right to call this a "labor of love", though there was definitely a lot of labor, and I can't imagine much money in it! Hopefully it will do some good. I think it's overdue.

@gandalf dug # flume is merged and scuttlebot@10.0.0 is published :boom: in #scuttlebutt
@gandalf pushed to %HQcMW5sNo...
@gandalf pushed to %HQcMW5sNo...
  "type": "git-repo",
  "name": "switchmode"
@gandalf pushed to %/TDXpkBsD...
@gandalf in #cooperative
Re: %uSJz6Ikbj

Thanks for sharing!

@gandalf dug # Coop sub-contracting _Notes from a conversation between [Protozoa](http: in #cooperative
@gandalf pushed to %Uw9KQgvAA...
  "type": "git-repo",
  "name": "lamos-to-json.js"
@gandalf pushed to %9XpOg8jxv...
  "type": "git-repo",
  "name": "json-to-lamos.js"
@gandalf pushed to %9VBA7OsdK...
  "type": "git-repo",
  "name": "concat-lamos-stream.js"
@gandalf pushed to %/TDXpkBsD...
@gandalf followed @Kas
@gandalf pushed to %/TDXpkBsD...
@gandalf subscribed to channel #cooperative
@gandalf changed something in about
@gandalf followed @Berkeley
@gandalf pushed to %/TDXpkBsD...
@gandalf pushed to %/TDXpkBsD...
@gandalf pushed to %/TDXpkBsD...
@gandalf in #programming
Re: %UwkwRwhdH

I think I found a subset of the better syntax I wanted that I could implement without restructuring the token parser completely. It's up all over the place as 2.0.0.


- - - a: w
      b: x
  - c:
    - y
- z
@gandalf pushed to %/TDXpkBsD...
@gandalf in #programming
Re: %UwkwRwhdH

@dominic, thanks!

On Yamlish:

I wasn't aware of this. Interested to do a few experiments, mostly on syntax. For example, YAML supports "inlining" nested structures in various ways that LAMOS does not. One example:

- key: value
  another key: value
- [a, "b", c]
  key: value
  another key: value
  - a
  - b
  - c

I wonder what yamlish supports. Hard to tell from the README, and there are no tests, so it's REPL time.

On seek-and-read:

I strongly suspect YAML's inline syntax frustrates reading from arbitrary line breaks. Insofar as JSON is valid YAML, well-formed YAML can pose the same problem JSON does. But "idiomatic", Ruby-style YAML may not. I'm not sure.

On feature set:

From JSON, LAMOS lacks nulls, booleans, and numbers. It gains strict white space rules and line comments.

I haven't missed any of the omissions from JSON. Nulls are easy to encode by the absence of an otherwise expected map key. Other types can be "wrapped":

- string
  type: boolean
  encoded: false
- another string
  type: scinum
  encoded: 1.2345e10
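
The wrapping convention above is easy to put behind a pair of helpers. A minimal sketch in JavaScript (the names `wrap` and `unwrap` are hypothetical, not part of lamos):

```javascript
// Hypothetical helpers for the "wrapped" encoding sketched above:
// represent non-string scalars as maps of strings.
function wrap (value) {
  if (typeof value === 'string') return value
  if (typeof value === 'boolean') {
    return { type: 'boolean', encoded: value ? 'true' : 'false' }
  }
  if (typeof value === 'number') {
    return { type: 'scinum', encoded: value.toExponential() }
  }
  throw new Error('cannot wrap ' + typeof value)
}

function unwrap (element) {
  if (typeof element === 'string') return element
  if (element.type === 'boolean') return element.encoded === 'true'
  if (element.type === 'scinum') return parseFloat(element.encoded)
  throw new Error('unknown wrapped type: ' + element.type)
}
```

Round-tripping stays within lists, maps, and strings, so any LAMOS (or JSON) serializer can carry the result.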

In the end, we can show all binary data as numbers, and we can show all binary data as text. Most people prefer text. The universal interface. Speaking of...

On multiline strings:

This was an initial stumbling block for me. I wanted something like YAML's pipe syntax. I could probably get there with a handwritten lexer and generated parser. But cleaning up the generated parser to make it readable would break my rule.

Fortunately, I've found that splitting multiline strings into lists of strings works 99% of the time, and often reads better than "\n" and the like, even in JSON. Come to think of it, the test examples for LAMOS (examples.json) use that approach. All the LAMOS markup is arrays of line-strings. It does presume some post-parsing processing, which is not always doable.
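
The convention is just split-on-write, join-on-read. A sketch (hypothetical helper names):

```javascript
// Multiline text as a list of line-strings: split before
// serializing, join back after parsing.
function splitLines (text) {
  return text.split('\n')
}

function joinLines (lines) {
  return lines.join('\n')
}
```

The round trip is exact: `joinLines(splitLines(text))` always returns `text`.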

On reading and typing:

Even with all manner of utilities and editor shortcuts for JSON, I really, really miss how easy YAML is to type. The syntax is just wonderful.

The only flaw in current LAMOS, to my eye, is map-in-list:

- a
- b
  x: c
  y: d

The newline before the map is mandatory. You can't put x: c on the preceding line, after the hyphen. That eats a lot of screen real estate. But perhaps it gets us seek-and-write, as you mentioned?

If I allow various shorthands, I'll really want recursive parsing logic, rather than the super-simple, sequential parser approach I'm getting away with now.

- a
- b
- x: c
  y: d
- - a
  - b
- x:
    - a
    - b
  y: c
- - - - a
      - b

Give a coder a little bit of sugar, the whole language becomes fizzy cola in no time.

@gandalf in #programming

Lists and Maps of Strings

I finally got around to tackling a long-planned project. I'd love feedback!



Basically, it's YAML stripped down to the barest essentials: lists, maps, strings, and comments. That's it.

The resulting syntax is easy to type, human-readable, and YAML pretty, without being YAML complex:

  - plain-text
  - line-delimited
  - spaces
  - two at a time
# This is a comment.

# The parser ignores blank lines.
  - list item
  # You must start a new line and indent
  # for maps within lists.
    item key: and value
    another key: and another value
    still another:
      - with a list!

The API is increasingly cluttered, out of an abundance of first-push enthusiasm. But the basics are JSON-esque: lamos.stringify() and lamos.parse(). There is also a lamos.stableStringify() that sorts keys, some stream constructors, and a couple of CLI tools for JSON interop.
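
The key sorting behind a stable stringify can be sketched in a few lines (an illustration, not lamos's actual implementation):

```javascript
// Recursively sort map keys so serialization is deterministic,
// which makes output diff- and hash-friendly.
function sortKeysDeep (value) {
  if (Array.isArray(value)) return value.map(sortKeysDeep)
  if (value && typeof value === 'object') {
    var sorted = {}
    Object.keys(value).sort().forEach(function (key) {
      sorted[key] = sortKeysDeep(value[key])
    })
    return sorted
  }
  return value
}
```

Sorting before stringifying means the same data always serializes the same way, which is what makes a stable variant worth a separate function.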

One of the goals was to keep the syntax simple enough that I could write a parser by hand in one sitting. That plus a test suite in structured data should make it fairly easy to port, if it ever comes to that.

I believe the only glaring inadequacy is lack of support for escaping the control characters, : and -. I'm going to see how long I can get by without that, but medium term, I think it's inevitable.
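
For illustration only, one shape such escaping might take is a leading backslash marking a whole line as literal text (a purely hypothetical scheme, not anything lamos implements):

```javascript
// Hypothetical escaping for lines that would otherwise read as
// list markers or map entries: prefix the whole line with a
// backslash, and strip it again on read.
function escapeLine (line) {
  return /^(-|\\)|: /.test(line) ? '\\' + line : line
}

function unescapeLine (line) {
  return line.charAt(0) === '\\' ? line.slice(1) : line
}
```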

@gandalf pushed to %/TDXpkBsD...
  "type": "git-repo",
  "name": "lamos.js"
@gandalf dug [sbotc](%133ulDgs/oC1DXjoK04vDFy6DgVBB/Zok15YJmuhD5Q=.sha256): a command-li in #scuttlebutt
@gandalf in #distributedtech
Re: %Aq4JfolmL

@dinosaur, Thanks for pasting in those footnotes! I see one name repeating a lot --- Stuart Haber.

@gandalf dug the [Bitcoin whitepaper](&sWdBkaiOxc3XM+QkCoGAMQXcQS1sZwjVOrlPwkj09VM=.sha2 in #distributedtech
@gandalf in #distributedtech


Anyone seen good work or writing on distributed timestamps? I'm particularly interested in cryptographic, or failing that, network-based approaches to timestamping that prevent backdating. Forward-dating would be a non-issue.

I've dug around, but found myself mired in the X.509 swamps. There are some good ideas in there, but they're wrapped in layers and layers of spec, domain-specific vocabulary, and abstraction. I'd love to find something more "fundamental".

@gandalf subscribed to channel #distributedtech
@gandalf in #time
Re: %iqR8vwC76

Think how old Julian must feel!

@gandalf in #russian
Re: %T2M9HhjJX

To be precise, it's designed around centralized development of that very multihash.

In other words --- a specific group of people develops the hash function algorithm + encoding + hash => multihash mapping? So the prefix dictionary, as such, is centralized. Even though it's distributed in the form of client source code.

@gandalf dug SSB has a nice format for representing hashes. I like it much better than IPFS in #russian
@gandalf dug This pull request seems to be my first not-to-my-project contribution on Gi in #opensource
@gandalf subscribed to channel #opensource
@gandalf in #software

Open Source: Theory of Operation

I've written a medium-length introduction to open source licensing, published here:


The guide takes what I might call a functional perspective, abstracting what the legal terms of open source licenses try to accomplish, starting from the baseline of the law's default rules about IP. The goal was to cover just enough of the background law to understand why licenses do what they do, and how they're broadly similar, even across categories like "permissive" and "copyleft".

Hopefully folks will find it helpful.

@gandalf in #solarpunk
Re: %QJUpoKWmd

@juul I still feel like a newb. Fortunately, easy to be a newb these days. They don't explode when you screw up anymore!

@gandalf dug this in #russian
@gandalf subscribed to channel #russian
@gandalf in #traveling
Re: %4EUHcaP/p

Reminds me of the Houston tunnels:


Anecdotally, it seems like cities with sufficiently terrible climates---hot or cold---eventually get some kind of avoid-the-nature tunnels. Some go up. Some go down. Dunno how it works out, who gets what.

As far as the names go, "skyway" is definitely the most fun.

@gandalf in #solarpunk
Re: %QJUpoKWmd

@juul: I have a Fagor cooker, as well. Had it a few years. Miles ahead of the old cookers, but still takes some finesse. Just a few tips that you've probably already found yourself:

  1. It pays to clean and lightly oil the gasket before each use. It's really easy to twist or stretch the gasket as you close the lid, creating a vacuum leak that's only apparent as you approach cooking pressure. Especially as it gets older.

  2. It's worth cleaning the regulator on each wash, too. Nobody gets hurt when it fails---big improvement, there---but the meal might be DOA.

  3. Even when the regulator assembly is clean, the pop-up indicator tends to stick. When I cook, I poke the indicator with a chopstick every once in a while as I wait for pressure to build. That helps to "unstick" it, so I can see it's time to turn down the heat before things get too far along.

@gandalf followed @indutny