Round out config reload #523

Merged: 12 commits from logtime_reload into master on Jun 28, 2017
Conversation

tylertreat (Contributor)

Implement config reload support for remaining options:

  • logtime
  • log_file
  • syslog
  • remote_syslog
  • max_connections
  • pid_file
  • max_control_line
  • max_payload
  • ping_interval
  • ping_max
  • write_deadline

@nats-io/core
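
For reference, a sketch of a config file touching these options plus the usual SIGHUP-driven reload; the key names follow the list above, but the values, paths, and exact accepted forms (e.g. seconds vs. duration strings) are illustrative only, not taken from this PR:

    # gnatsd.conf -- illustrative values only
    logtime: true
    log_file: "/var/log/gnatsd.log"
    pid_file: "/var/run/gnatsd.pid"
    max_connections: 100
    max_control_line: 1024
    max_payload: 65536
    ping_interval: 120      # seconds
    ping_max: 2
    write_deadline: "2s"

    # ask a running server to re-read the file in place
    # kill -HUP $(cat /var/run/gnatsd.pid)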

coveralls (Coverage Status): Changes Unknown when pulling a0f32a1 on logtime_reload into master.

tylertreat mentioned this pull request on Jun 27, 2017.
ColinSullivan1 (Member) left a comment:
LGTM.

server/reload.go Outdated

clients = make([]*client, len(server.clients))
i = 0
)
for _, client := range server.clients {

Member:
It may be useful to add a comment to the effect that map iteration is where the randomness is introduced; that feature of Go may not be readily apparent to future contributors.
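
A tiny, self-contained Go example (not code from this PR) of the property being referenced: ranging over a map visits keys in an unspecified, randomized order, which is what makes the connections chosen for closing effectively random.

    package main

    import "fmt"

    func main() {
        conns := map[uint64]string{1: "a", 2: "b", 3: "c", 4: "d", 5: "e"}
        // The Go runtime deliberately randomizes map iteration order, so the
        // two loops below will usually print the IDs in different orders.
        for id := range conns {
            fmt.Print(id, " ")
        }
        fmt.Println()
        for id := range conns {
            fmt.Print(id, " ")
        }
        fmt.Println()
    }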

coveralls (Coverage Status): Changes Unknown when pulling 84d00a0 on logtime_reload into master.

petemiron (Contributor) left a comment:
I approve with one comment for discussion. I'm not sure we should actively evict clients if max conns is adjusted, until we refine strategies for eviction. Otherwise looks great.

i = 0
)
// Map iteration is random, which allows us to close random connections.
for _, client := range server.clients {

Contributor:
Should we actively evict until we can offer strategies for eviction or some of the client hints we discussed previously (lame duck mode)?

tylertreat (Contributor, Author):
Discussed offline. Consensus was to leave this in for now. If you're decreasing max conns, there's probably a reason and you probably want the connections to decrease. I think this aligns with users' expectations.
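
To make the trade-off concrete, here is a rough, self-contained sketch of the eviction behavior being discussed; the type and method names are toy stand-ins inferred from the excerpts in this review, not the PR's actual code:

    package main

    import (
        "fmt"
        "sync"
    )

    type client struct{ id uint64 }

    func (c *client) closeConnection() { fmt.Println("closing client", c.id) }

    type server struct {
        mu      sync.Mutex
        clients map[uint64]*client
    }

    // applyMaxConnections snapshots the client map under the lock, then closes
    // just enough connections (in the map's random iteration order) to bring
    // the server back under the new limit.
    func (s *server) applyMaxConnections(max int) {
        s.mu.Lock()
        clients := make([]*client, 0, len(s.clients))
        for _, c := range s.clients {
            clients = append(clients, c)
        }
        s.mu.Unlock()

        if len(clients) > max {
            for _, c := range clients[max:] {
                c.closeConnection()
            }
        }
    }

    func main() {
        s := &server{clients: map[uint64]*client{}}
        for i := uint64(1); i <= 5; i++ {
            s.clients[i] = &client{id: i}
        }
        s.applyMaxConnections(3) // evicts 2 randomly chosen clients
    }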

kozlovic (Member) left a comment:
I am worried about changing the type of exported fields.

@@ -381,7 +381,7 @@ type Varz struct {
     *Info
     *Options
     Port int `json:"port"`
-    MaxPayload int `json:"max_payload"`
+    MaxPayload int64 `json:"max_payload"`

Member:
I would be very careful here: this may break user code that embeds/uses the public API to process monitoring. I would leave this as int (see below).

server/opts.go Outdated

@@ -54,7 +54,7 @@ type Options struct {
     HTTPSPort int `json:"https_port"`
     AuthTimeout float64 `json:"auth_timeout"`
     MaxControlLine int `json:"max_control_line"`
-    MaxPayload int `json:"max_payload"`
+    MaxPayload int64 `json:"max_payload"`

Member:
Same as monitor, this is a breaking change.

server/opts.go Outdated

@@ -245,7 +245,7 @@ func ProcessConfigFile(configFile string) (*Options, error) {
     case "max_control_line":
         opts.MaxControlLine = int(v.(int64))
     case "max_payload":
-        opts.MaxPayload = int(v.(int64))
+        opts.MaxPayload = v.(int64)

Member:
Would leave it as an int.

// below the limit if necessary.
func (m *maxConnOption) Apply(server *Server) {
server.mu.Lock()
var (

Member:
It may be as fast and more Go-like to do:

var clients []*client
clients = append(clients, server.clients...)

tylertreat (Contributor, Author):
This doesn't work since server.clients is a map, not a slice.

Member:
Oh right, forgot about that.
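
For readers less familiar with Go, a tiny standalone example of why the append suggestion doesn't apply here and what the copy ends up looking like (generic types, not the PR's):

    package main

    import "fmt"

    func main() {
        // server.clients in the excerpt above is a map keyed by client ID.
        clients := map[uint64]string{1: "a", 2: "b", 3: "c"}

        // clientList := append([]string{}, clients...) // does not compile: clients is a map, not a slice

        // The values have to be copied out explicitly:
        clientList := make([]string, 0, len(clients))
        for _, c := range clients {
            clientList = append(clientList, c)
        }
        fmt.Println(clientList) // order varies between runs
    }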

server/reload.go Outdated
// setting.
type maxPayloadOption struct {
noopOption
newValue int64

Member:
Keep it an int here too.

server/reload.go Outdated
server.info.MaxPayload = m.newValue
server.generateServerInfoJSON()
for _, client := range server.clients {
atomic.StoreInt64(&client.mpay, m.newValue)

Member:
Only here, cast it to int64, as in: atomic.StoreInt64(&client.mpay, int64(m.newValue))
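
Putting the reviewer's suggestions together, a minimal sketch of "keep int everywhere, cast only at the atomic store"; the type and field names are toy stand-ins based on the excerpts, not copied from the PR:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    type client struct {
        mpay int64 // int64 because it is accessed with sync/atomic
    }

    type maxPayloadOption struct {
        newValue int // stays an int, so exported option/monitoring types keep their type
    }

    // apply pushes the new limit to every client, casting to int64 only at the
    // atomic store.
    func (m *maxPayloadOption) apply(clients []*client) {
        for _, c := range clients {
            atomic.StoreInt64(&c.mpay, int64(m.newValue))
        }
    }

    func main() {
        clients := []*client{{}, {}}
        (&maxPayloadOption{newValue: 1 << 20}).apply(clients)
        fmt.Println(atomic.LoadInt64(&clients[0].mpay)) // 1048576
    }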

server/reload.go Outdated
case "maxcontrolline":
diffOpts = append(diffOpts, &maxControlLineOption{newValue: newValue.(int)})
case "maxpayload":
diffOpts = append(diffOpts, &maxPayloadOption{newValue: newValue.(int64)})

Member:
That would be an int.

}
if updated.WriteDeadline != 2*time.Second {
t.Fatalf("WriteDeadline is incorrect.\nexpected 2s\ngot: %s", updated.WriteDeadline)
}

Member:
Is there a reason why we don't test MaxPayload here?
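
If MaxPayload were covered here, the check might look something like the sketch below, continuing the test excerpt above; the expected value of 1024 is purely hypothetical and would have to match whatever max_payload the reloaded config sets:

    if updated.MaxPayload != 1024 {
        t.Fatalf("MaxPayload is incorrect.\nexpected: 1024\ngot: %v", updated.MaxPayload)
    }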

server/server.go Outdated

@@ -36,7 +36,7 @@ type Info struct {
     SSLRequired bool `json:"ssl_required"` // DEPRECATED: ssl json used for older clients
     TLSRequired bool `json:"tls_required"`
     TLSVerify bool `json:"tls_verify"`
-    MaxPayload int `json:"max_payload"`
+    MaxPayload int64 `json:"max_payload"`

Member:
Again, would not change that.

server/client.go Outdated

@@ -94,7 +94,7 @@ type client struct {
     opts clientOpts
     start time.Time
     nc net.Conn
-    mpay int
+    mpay int64

Member:
Move this to the top of the structure (say, after stats) because of this: golang/go#599

Member:
To see if this is still relevant, you could build for 32-bit (on a Linux VM), run it, and see if it crashes (that is, if you don't move the field).
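
For context, the concern referenced by golang/go#599 is that the 64-bit sync/atomic functions require 8-byte alignment, which 32-bit builds only guarantee for the first word of a struct. A standalone illustration with toy types (not the PR's client struct):

    package main

    import (
        "fmt"
        "sync/atomic"
        "unsafe"
    )

    // Placing the atomically accessed int64 first keeps it 8-byte aligned
    // even when compiled for a 32-bit platform such as 386 or arm.
    type goodLayout struct {
        mpay int64
        open bool
    }

    // Here mpay can land on a 4-byte boundary in a 32-bit build, and the
    // 64-bit atomic operations would then panic at runtime.
    type riskyLayout struct {
        open bool
        mpay int64
    }

    func main() {
        g := &goodLayout{}
        atomic.StoreInt64(&g.mpay, 1<<20) // safe on both 32- and 64-bit builds
        fmt.Println(atomic.LoadInt64(&g.mpay))
        // 8 on 64-bit builds, typically 4 on 32-bit builds:
        fmt.Println(unsafe.Offsetof(riskyLayout{}.mpay))
    }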

coveralls (Coverage Status): Changes Unknown when pulling 2a7af1b on logtime_reload into master.

tylertreat (Contributor, Author):
Trying to track down what's causing these intermittent build failures...

kozlovic (Member) left a comment:
Once you figure out what causes those failures (I assume they are independent of the changes in this PR)

// Apply is a no-op because the write deadline will be reloaded after options
// are applied.
func (w *writeDeadlineOption) Apply(server *Server) {
server.Noticef("Reloaded: write_deadline = %s", w.newValue)

Member:
Not sure if we'd want this, but if String() is implemented for the options being reloaded, we could log once after calling Apply here: https://github.com/nats-io/gnatsd/pull/523/files#diff-76ed437f66106035c2352c7500aa8606R544

tylertreat (Contributor, Author):
IMO logging in Apply gives more flexibility over what/how options are logged and also makes it harder to accidentally log something you shouldn't (like a password for example).
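
A compact sketch of the pattern being discussed, with logging done inside each Apply so every option decides what is safe to print; the interface and type names here are assumptions modeled on the excerpts, not the PR's exact code:

    package main

    import (
        "log"
        "time"
    )

    // Toy server type standing in for *Server in the excerpts above.
    type server struct{ writeDeadline time.Duration }

    func (s *server) noticef(format string, v ...interface{}) { log.Printf(format, v...) }

    // option is the reload unit: each changed setting knows how to apply
    // itself and what it is safe to log.
    type option interface {
        apply(s *server)
    }

    type writeDeadlineOption struct{ newValue time.Duration }

    // apply logs from inside the option, so a credential-bearing option could
    // simply choose not to print its value.
    func (w *writeDeadlineOption) apply(s *server) {
        s.writeDeadline = w.newValue
        s.noticef("Reloaded: write_deadline = %s", w.newValue)
    }

    func main() {
        s := &server{}
        opts := []option{&writeDeadlineOption{newValue: 2 * time.Second}}
        for _, o := range opts {
            o.apply(s)
        }
    }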

tylertreat merged commit 446cf96 into master on Jun 28, 2017.
tylertreat deleted the logtime_reload branch on Jun 28, 2017.