This commit adds an option to turn off all encryption. This mode is
used for tests, debugging, achieving interop between protocol
implementations, learning how the protocol works (nc ftw), and
worst-case networks which _demand_ to be able to snoop on all the
traffic. (Sadly, there are some private intranets like this...)
(We should consider at least _signing_ all this traffic.)
Because of the severity of this sort of thing, it is an
all-or-nothing deal: encryption is either fully ON or fully OFF.
This way, partially unencrypted nodes cannot accidentally be left
running without the user's understanding. Nodes without encrypted
connections will simply not be able to speak to any of the global
bootstrap nodes, or to anyone on the public network.
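A minimal sketch of what such an all-or-nothing switch could look
like. The flag name and helper here are illustrative only, not the
actual API:

    // Hypothetical sketch: one package-level switch that either secures
    // every connection or none of them. Names are illustrative only.
    package conn

    import (
        "errors"
        "net"
    )

    // EncryptConnections is consulted for every new connection. It is
    // all or nothing: there is no per-connection override.
    var EncryptConnections = true

    // newSecureConn wraps an accepted or dialed net.Conn. When encryption
    // is disabled, the raw connection is returned as-is, so tools like nc
    // can speak the protocol directly.
    func newSecureConn(raw net.Conn, secure func(net.Conn) (net.Conn, error)) (net.Conn, error) {
        if !EncryptConnections {
            return raw, nil // plaintext mode: tests, debugging, interop
        }
        if secure == nil {
            return nil, errors.New("encryption required but no secure transport given")
        }
        return secure(raw)
    }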
License: MIT
Signed-off-by: Juan Batiz-Benet <juan@benet.ai>
- add extra check to dialblock test
- move filter to separate package
- improve tests
- sink filters down into p2p/net/conn/listener (see the sketch below)
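As a rough picture of what sinking filters into the listener means,
a sketch using only the standard library. The Filters type and its
AddrBlocked method are stand-ins for the real filter package:

    // Sketch: an address-filtering listener that drops blocked
    // connections inside Accept, so callers never see them at all.
    package filterlistener

    import "net"

    // Filters holds blocked IP ranges.
    type Filters struct {
        blocked []*net.IPNet
    }

    // AddrBlocked reports whether addr falls inside any blocked range.
    func (f *Filters) AddrBlocked(addr net.Addr) bool {
        tcp, ok := addr.(*net.TCPAddr)
        if !ok {
            return false
        }
        for _, n := range f.blocked {
            if n.Contains(tcp.IP) {
                return true
            }
        }
        return false
    }

    // Listener filters connections as they are accepted.
    type Listener struct {
        net.Listener
        Filters *Filters
    }

    func (l *Listener) Accept() (net.Conn, error) {
        for {
            c, err := l.Listener.Accept()
            if err != nil {
                return nil, err
            }
            if l.Filters != nil && l.Filters.AddrBlocked(c.RemoteAddr()) {
                c.Close() // blocked by a dial/listen filter
                continue
            }
            return c, nil
        }
    }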
License: MIT
Signed-off-by: Jeromy <jeromyj@gmail.com>
Signed-off-by: Juan Batiz-Benet <juan@benet.ai>
Except when there is an explicit os.Exit(1) after the Critical line;
in that case, replace it with Fatal{,f}.
Go's log and logrus already call os.Exit(1) by default on Fatal.
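For illustration, the transformation looks roughly like this (shown
with the standard library logger; the real code uses the project's
logging package, and startDaemon is a placeholder):

    package main

    import "log"

    func main() {
        if err := startDaemon(); err != nil {
            // Before: log.Criticalf("daemon failed to start: %s", err)
            //         os.Exit(1)
            // After: Fatalf logs and then calls os.Exit(1) itself.
            log.Fatalf("daemon failed to start: %s", err)
        }
    }

    func startDaemon() error { return nil } // placeholder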
License: MIT
Signed-off-by: rht <rhtbot@gmail.com>
Different multiaddrs are not enough: nodes may share transports,
and NAT port mappings will likely only work on the base IP/TCP port
pair. We go one step further and require different root (IP)
addrs, just in case some NATs group by IP. In practice, this is
what we want: use an address only if hosts on different parts of
the network have seen it.
If the same peer observed the same address twice, it would be
double-counted as two different observations. This change adds a map
to make sure we count each observer only once.
This is easily extended to require more than two observations,
but I have not yet encountered NATs for which this is relevant.
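A sketch of the bookkeeping the two changes above describe: each
observed address keeps a set of observers keyed by root IP, so an
observer is counted once, and the address is only used after enough
distinct root IPs have seen it. Types and names here are
illustrative, not the actual go-ipfs code:

    package obsaddr

    import "net"

    type observedAddr struct {
        // observer root IP -> seen; dedupes repeated observations
        // from the same observer and groups observers by root IP.
        seenBy map[string]struct{}
    }

    type ObservedAddrSet struct {
        addrs map[string]*observedAddr // observed address -> record
    }

    func NewObservedAddrSet() *ObservedAddrSet {
        return &ObservedAddrSet{addrs: make(map[string]*observedAddr)}
    }

    // Add records that observer saw us at observed. Repeated
    // observations from the same observer are not double counted.
    func (s *ObservedAddrSet) Add(observed string, observer net.IP) {
        oa, ok := s.addrs[observed]
        if !ok {
            oa = &observedAddr{seenBy: make(map[string]struct{})}
            s.addrs[observed] = oa
        }
        oa.seenBy[observer.String()] = struct{}{}
    }

    // Activated reports whether at least min distinct root IPs have
    // seen this address, i.e. whether we should advertise it.
    func (s *ObservedAddrSet) Activated(observed string, min int) bool {
        oa, ok := s.addrs[observed]
        return ok && len(oa.seenBy) >= min
    }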
We had a very nasty problem: handshakes were serial, so incoming
dials would wait for each other to finish handshaking. This was
particularly problematic when handshakes hung: nodes would not
recover quickly. This led to gateways not bootstrapping peers
fast enough.
The approach taken here is to do what crypto/tls does:
defer the handshake until Read/Write [0]. There are a number of
reasons why this is _the right thing to do_:
- it delays handshaking until it is known to be necessary (doing io)
- it "accepts" before the handshake, getting the handshake out of the
critical path entirely.
- it defers to the user's parallelization of conn handling. Users
must implement this in some way already, so use that instead of
picking constants that are sure to be wrong (how many handshakes
should run in parallel?)
[0] http://golang.org/src/crypto/tls/conn.go#L886
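The shape of the change, sketched in the same style as crypto/tls's
lazy Handshake. Names here are illustrative, not the actual secio
code:

    // Sketch: defer the handshake until the first Read/Write, the way
    // crypto/tls does. Accept returns immediately; the handshake runs
    // at most once, on whichever goroutine touches the conn first.
    package lazyconn

    import (
        "net"
        "sync"
    )

    type Conn struct {
        raw           net.Conn
        secure        func(net.Conn) (net.Conn, error) // runs the handshake
        handshakeOnce sync.Once
        handshakeErr  error
        secured       net.Conn
    }

    func (c *Conn) Handshake() error {
        c.handshakeOnce.Do(func() {
            c.secured, c.handshakeErr = c.secure(c.raw)
        })
        return c.handshakeErr
    }

    func (c *Conn) Read(b []byte) (int, error) {
        if err := c.Handshake(); err != nil {
            return 0, err
        }
        return c.secured.Read(b)
    }

    func (c *Conn) Write(b []byte) (int, error) {
        if err := c.Handshake(); err != nil {
            return 0, err
        }
        return c.secured.Write(b)
    }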
We now consider debugerror harmful: we've run into cases where
debugerror.Wrap() hid valuable error information (err == io.EOF?).
I've removed it from the main code, but left it in some tests.
Go errors are lacking, but unfortunately, this isn't the solution.
It is possible that debugerror.New or debugerror.Errorf should
remain (i.e. only remove debugerror.Wrap), but we don't use
these errors often enough to keep them.
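A minimal illustration of the failure mode, using fmt.Errorf here in
place of debugerror.Wrap: once the error is wrapped, the usual
err == io.EOF check no longer fires.

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    func main() {
        r := strings.NewReader("")
        buf := make([]byte, 1)

        _, err := r.Read(buf)
        fmt.Println(err == io.EOF) // true: callers can detect end-of-stream

        wrapped := fmt.Errorf("read failed: %s", err) // what Wrap-style helpers do
        fmt.Println(wrapped == io.EOF)                // false: the sentinel is lost
    }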
The same keys + nonces were being observed in secio. As described in
https://github.com/ipfs/go-ipfs/issues/1016, the handshake must
be talking to itself. This can happen when an outgoing TCP dial with
REUSEPORT on connects back to the same address.
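One way to notice this condition, sketched as a check during the
handshake. It assumes the handshake exchanges a random nonce and the
peer's public key; the names are illustrative. If the bytes we
receive are exactly the bytes we sent, we are talking to ourselves
and should abort:

    package secio

    import (
        "bytes"
        "errors"
    )

    // ErrSelfDial is returned when the handshake appears to be talking
    // to itself, e.g. an outgoing REUSEPORT dial that connected back to
    // our own address.
    var ErrSelfDial = errors.New("secio: received our own proposal (dialed self?)")

    // checkNotSelf compares the proposal we sent (our nonce + public key
    // bytes) with the proposal we received. Identical bytes mean the
    // "remote" side is us.
    func checkNotSelf(localNonce, remoteNonce, localPubKey, remotePubKey []byte) error {
        if bytes.Equal(localNonce, remoteNonce) && bytes.Equal(localPubKey, remotePubKey) {
            return ErrSelfDial
        }
        return nil
    }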
reuseport is a hack. It is necessary for us to do certain kinds of
TCP NAT traversal. Ideally, reuseport would be available in Go:
https://github.com/golang/go/issues/9661
But until that issue is fixed, we're stuck with this. In some cases
(e.g. when nodes are not NATed), reuseport is strictly a detriment.
This commit introduces an environment variable, IPFS_REUSEPORT, that
can be set to false to avoid using reuseport entirely:
IPFS_REUSEPORT=false ipfs daemon
This approach addresses our current need. It could become a config
var if necessary. If reuseport continues to cause problems, we should
look into improving it.
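A sketch of how the switch can be consulted. The env var name
IPFS_REUSEPORT is the one introduced here; the helper itself is
illustrative:

    package reuseport

    import (
        "os"
        "strconv"
    )

    // envReuseportEnabled reports whether reuseport should be used at
    // all. It defaults to true; IPFS_REUSEPORT=false disables it.
    func envReuseportEnabled() bool {
        v := os.Getenv("IPFS_REUSEPORT")
        if v == "" {
            return true
        }
        enabled, err := strconv.ParseBool(v)
        if err != nil {
            return true // unparseable values fall back to the default
        }
        return enabled
    }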
- humanize bandwidth output
- instrument conn.Conn for bandwidth metrics (see the sketch below)
- add poll command for continuous bandwidth reporting
- move bandwidth tracking onto multiaddr net connections
- another mild refactor of recording locations
- address concerns from PR
- lower the number of mock nodes in the race test due to increased
goroutines per connection
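As a rough picture of the conn instrumentation, a sketch of a
connection wrapper that counts bytes in and out with atomic counters
that a poll command can read periodically. Names are illustrative:

    package meterconn

    import (
        "net"
        "sync/atomic"
    )

    type MeteredConn struct {
        net.Conn
        bytesIn  uint64
        bytesOut uint64
    }

    func (c *MeteredConn) Read(b []byte) (int, error) {
        n, err := c.Conn.Read(b)
        atomic.AddUint64(&c.bytesIn, uint64(n))
        return n, err
    }

    func (c *MeteredConn) Write(b []byte) (int, error) {
        n, err := c.Conn.Write(b)
        atomic.AddUint64(&c.bytesOut, uint64(n))
        return n, err
    }

    // Totals returns bytes received and sent so far; a poll command can
    // call this for continuous bandwidth reporting.
    func (c *MeteredConn) Totals() (in, out uint64) {
        return atomic.LoadUint64(&c.bytesIn), atomic.LoadUint64(&c.bytesOut)
    }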