9 pages about "Golang 🐙"
What works well
- having a monorepo
- being protobuf-first and generating a lot of code
- the codebase was “big-refactor”-friendly, including several refactors that modified 50+ files at once
- we’ve learned a lot about:
  - our project: the features, the roadmap, the difficulties, etc.
  - our dependencies (IPFS, gomobile, react-native, BLE, etc.)
What needs to be improved
- The code was too complex to read
- The codebase was too complex to update safely
- There were not enough rules about:
  - where to implement something, how to name things
  - how to implement things
- The Makefile rules and CI can be improved
- The tests should be more reliable
- We need to learn more about our future protocol; for now, it exists only in our heads, and we will undoubtedly fail to implement v1 of the protocol. I prefer to fail fast!
Several blogposts, slides, repos, and videos later…
I spent the last three days reading blog posts, slides, and repositories, and watching videos about what other people are doing right now.
Then, I looked back on Berty and my other projects and listed a set of rules I like the most.
As usual, a rule is something that can always have exceptions :)
- Focus on readability; checking what the godoc looks like is a very good way to tell whether an API will be easy to adopt.
- Avoid magic, no global vars, no
- Sharing logic / reusable business functionality is most of the time over-engineering
- Enumerate requirements in function constructors. Use dependency injection (not dependency containers!) and make `go build` your best friend; the logger should also be injected
- If your project is small enough, put everything at the root of the project -> mono package
- When you are creating a very powerful and complex library, it can be worth making a little-sister library: a light, opinionated wrapper around the permissive one
- Embrace middlewares to loosen coupling for timeout handling, retry mechanisms, authentication checks, etc.
- Reduce comments, focus on useful variable and function naming
- Function and variable names are important to review
- Limit the number of packages, the number of functions, the number of interfaces
- Keep things simple and do not split into too many components at the beginning, split only because of a problem, not because of an anticipation
- Always try to keep the indentation level minimal
- Use short function and variable names
- Variables can even be one or two letters long (initials) when used close to their initialization
- Receiver names should always be 1 or 2 letters long
- Prefer synchronous functions to asynchronous ones; it’s easy to make an asynchronous wrapper over a synchronous function, not the opposite
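The "async wrapper is trivial" point in one small sketch (the `Sum`/`SumAsync` names are made up for illustration):

```go
package main

import "fmt"

// Sum is synchronous: easy to call, easy to test.
func Sum(nums []int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

// SumAsync is a thin asynchronous wrapper over the synchronous Sum.
// Going the other way, making an async API synchronous again, is much harder.
func SumAsync(nums []int) <-chan int {
	out := make(chan int, 1)
	go func() {
		out <- Sum(nums)
	}()
	return out
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))        // 6
	fmt.Println(<-SumAsync([]int{1, 2, 3})) // 6
}
```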
- Use named results (`return`) for documentation
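A small example of named results documenting a signature; `divide` is an illustrative function, not from the post:

```go
package main

import (
	"errors"
	"fmt"
)

// divide uses named results: the signature alone tells the reader
// what comes back, without a doc comment.
func divide(a, b float64) (quotient float64, err error) {
	if b == 0 {
		err = errors.New("division by zero")
		return
	}
	quotient = a / b
	return
}

func main() {
	q, err := divide(6, 3)
	fmt.Println(q, err) // 2 <nil>
}
```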
- Be flat; only use `pkg/` for packages you want other people to use, and `internal/` for your implementation details. Most of the code should start in `internal/` before being moved to `pkg/`, only once you are sure it can be useful for someone else and it has become mature enough that there is less risk of it changing.
- Use feature-flags to configure the app; feature-flags are “documentation”! They also allow you to merge (multiple) (unfinished) (long-running) experiments more quickly
- Flags should be taken into account in this order: CLI > config > env
- Use a structured logger and bind it with the std logger (https://github.com/go-kit/kit/tree/master/log#interact-with-stdlib-logger)
- If your repo uses multiple main languages, they should be namespaced in their own directories to make everything easier for the tools to manipulate.
- Put your `.proto` files in an `api/` directory, but you can configure them to generate files into your existing Go packages.
- Goroutines:
  - should always have a well-defined lifecycle
  - you can use https://godoc.org/github.com/oklog/run
  - look at these patterns: Nursery, Futures, Scatter/Gather
- Package names should be:
  - the same as the directory name (always)
  - singular, lowercase, alpha-num
  - unique in your project; unique with Go core packages too, if possible
- Use `-race` when building and testing, from the beginning
- `context.Value` is only for request-scoped information, and only when it can’t be passed in another way
- Do not hesitate to pass `context.Context` as the first argument of most of your functions (I need to investigate more and have a stricter rule here)
- Always put a `doc.go` file in the `pkg/*` packages to configure the package vanity URL and hold some documentation. When your package has multiple Go files, it will be easier to know where to edit those things
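A sketch of what such a `doc.go` might contain; the package name and vanity import path below are made up for illustration:

```go
// Package g provides ... (package-level documentation lives here, so it is
// easy to find even once the package grows to many files).
//
// The import comment below configures the vanity import path;
// the URL is illustrative.
package g // import "example.com/project/pkg/g"
```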
- Avoid having too many interfaces, and when you do create some, always try to declare them in the caller package, not the implementer one
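The caller-side interface rule, sketched in one file (in real code the implementer would live in another package and never mention the interface; all names here are illustrative):

```go
package main

import "fmt"

// Greeter is declared by the consumer: it describes the behavior this
// package needs, not a classification of implementations.
type Greeter interface {
	Greet(name string) string
}

// Announce only depends on the small behavior it actually uses.
func Announce(g Greeter, name string) string {
	return "announcement: " + g.Greet(name)
}

// englishGreeter would normally live in a separate implementer package
// and satisfy Greeter implicitly, without importing it.
type englishGreeter struct{}

func (englishGreeter) Greet(name string) string {
	return "hello, " + name
}

func main() {
	fmt.Println(Announce(englishGreeter{}, "gopher")) // announcement: hello, gopher
}
```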
- `go test` should always work after a fresh clone! If you have unreliable or environment-specific tests, guard them behind flags or env vars
- The tests should be easily readable and self-explanatory; they are probably the best place to “document” the edge cases of your library
- Use table-driven tests a lot
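The table-driven shape in miniature: each case is one row, and documenting a new edge case is just adding a row. `Abs` and the test are illustrative; in a real project this would sit in a `_test.go` file:

```go
package main

import (
	"fmt"
	"testing"
)

// Abs is the function under test.
func Abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

// TestAbs shows the table-driven shape with named subtests.
func TestAbs(t *testing.T) {
	tests := []struct {
		name string
		in   int
		want int
	}{
		{"positive", 3, 3},
		{"negative", -3, 3},
		{"zero", 0, 0},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := Abs(tt.in); got != tt.want {
				t.Errorf("Abs(%d) = %d, want %d", tt.in, got, tt.want)
			}
		})
	}
}

func main() {
	fmt.Println(Abs(-3)) // 3
}
```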
- If you are manipulating test-fixtures often, you can add a test
- If you write mocks, implement them in the same package as the real implementation, in a `testing.go` file; a mock should, in general, return a fully started in-memory server.
- If you need to write tests at runtime, you can use http://github.com/mitchellh/go-testing-interface
- If you have a complex struct, e.g., a server, do not hesitate to add a `Test bool` field that configures it to be testing-friendly
- When testing complex structs, compare a string representation (JSON, or something like that)
- Only test exported functions; unexported functions are implementation details
- If you write helpers, they should not return an error but take `testing.T` as an argument and call
- Most of the rules defined here can be skipped entirely in the `internal/` directory. This directory is the perfect place for things that change often.
- Add a githook that runs
- When is it better to have `ListAllUsers() + ListUsersByGroup() + ListActiveUsers()`...?
- What’s the best way of organizing code that involves multiple languages, i.e., bridges?
- When does it make sense to have an
- When does it make sense to have a `model` package vs. a
Suggested project layout for the monorepo of a big project
- api/
  - a.proto
  - a.swagger.json (generated)
  - b.proto
  - b.swagger.json (generated)
- assets/
  - logo.png
- build/
  - ci/
    - script.sh
  - package/
    - script.sh
- configs/
  - prod.json
  - dev.json
- deployments/
  - c/
    - docker-compose.yml
  - d/
    - docker-compose.yml
- docs/
  - files.md
- examples/
  - descriptive-dirname/
    - ...
- githooks/
  - pre-commit
- go/
  - cmd/
    - mybinary/
      - main.go
  - internal/
    - e/
      - doc.go
      - e.go
    - f/
      - doc.go
      - f.go
  - pkg/
    - g/
      - doc.go
      - g.go
    - h/
      - doc.go
      - h.go
  - Makefile
  - go.mod
- js/
- test/
  - testdata/
    - blob.json
- tools/
  - docker-protoc/
    - Dockerfile
    - script.sh
- Makefile
- Dockerfile
Interesting links and quotes I loved
I however also run into cases where I end up accidentally writing Java-style interfaces - typically after I come back from a stint of writing code in Python or Java. The desire to overengineer and “class all the things” something is quite strong, especially when writing Go code after writing a lot of object-oriented code.
TL;DR — The House (Business) Always Wins – In my 15-year involvement with coding, I have never seen a single business “converge” on requirements. They only diverge. It is simply the nature of business and it’s not the business people’s fault.
TL;DR - Duplication is better than the wrong abstraction - Designs are always playing catch up to changing real-world requirements. So even if we found a perfect abstraction by a miracle, it comes tagged with an expiry date because #1 — The House wins in the end. The best quality of a Design today is how well it can be undesigned. There is an amazing article on write code that is easy to delete, not easy to extend.
TL;DR — Wrappers are an exception, not the norm. Don’t wrap good libraries for the sake of wrapping.
TL;DR — Don’t let <X>-ities go unchallenged. Clearly define and evaluate the Scenario/Story/Need/Usage. Tip: Ask a simple question — “What’s an example story/scenario?” — And then dig deep on that scenario. This exposes flaws in most <X>-ities.
Industrial programming means writing code once and maintaining it into perpetuity. Maintenance is the continuous practice of reading and refactoring. Therefore, industrial programming overwhelmingly favors reads, and on the spectrum of easy to read vs. easy to write, we should bias strongly towards the former.
Looking at interfaces as a way to classify implementations is the wrong approach; instead, look at interfaces as a way to identify code that expects common sets of behaviors.
Instead of making code easy-to-delete, we are trying to keep the hard-to-delete parts as far away as possible from the easy-to-delete parts.
Write more boilerplate. You are writing more lines of code, but you are writing those lines of code in the easy-to-delete parts.
I’m not advocating you go out and create a /protocol/ and a /policy/ directory, but you do want to try and keep your util directory free of business logic, and build simpler-to-use libraries on top of simpler-to-implement ones. You don’t have to finish writing one library to start writing another atop.
Layering is less about writing code we can delete later, but making the hard to remove code pleasant to use (without contaminating it with business logic).
You’ve copy-pasted, you’ve refactored, you’ve layered, you’ve composed, but the code still has to do something at the end of the day. Sometimes it’s best just to give up and write a substantial amount of trashy code to hold the rest together.
Business logic is code characterized by a never-ending series of edge cases and quick and dirty hacks. This is fine. I am ok with this. Other styles like ‘game code’, or ‘founder code’ are the same thing: cutting corners to save a considerable amount of time.
The reason? Sometimes it’s easier to delete one big mistake than try to delete 18 smaller interleaved mistakes. A lot of programming is exploratory, and it’s quicker to get it wrong a few times and iterate than think to get it right first time.
the whole step 5 is <3
I’m not suggesting you write the same ball of mud ten times over, perfecting your mistakes. To quote Perlis: “Everything should be built top-down, except the first time”. You should be trying to make new mistakes each time, take new risks, and slowly build up through iteration.
Instead of breaking code into parts with common functionality, we break code apart by what it does not share with the rest. We isolate the most frustrating parts to write, maintain, or delete away from each other. We are not building modules around being able to re-use them, but being able to change them.
When a module does two things, it is usually because changing one part requires changing the other. It is often easier to have one awful component with a simple interface, than two components requiring a careful co-ordination between them.
The strategies I’ve talked about — layering, isolation, common interfaces, composition — are not about writing good software, but how to build software that can change over time.
A common fallacy is to assume authors of incomprehensible code will somehow be able to express themselves lucidly and clearly in comments.
assh, formerly known as “Advanced SSH config”, is a smart tool that was designed to wrap tightly around your SSH and enhance it, like a superhero suit that has various gadgets installed. It adds regex, aliases, gateways, dynamic hostnames, graphviz, notifications, json output and yaml configuration.
Some of its configuration features are:
- regex support
- aliases -> gate.domain.tld
- includes: split configuration in multiple files
- gateways -> transparent ssh connection chaining
- inheritance: make hosts inherit from other hosts or templates
- variable expansion: resolve variables from the environment
- desktop notifications: based on events
- Graphviz representation of the hosts
assh manages your `~/.ssh/config` file, taking care of keeping a backup of it.
A few usage examples:
assh config build: Rewrites and replaces the existing ~/.ssh/config file.
assh config graphviz: Generates a Graphviz graph of the hosts.
assh sockets list: Lists active control sockets.
assh sockets master: Creates a master control socket.
assh ping: Sends packets to the SSH server and displays stats.
Those are some of the highlights of assh. Visit its GitHub page to find out more about its configuration, usage and integration.