
doc: effective go semaphore example violates memory model #5023

Closed
kylelemons opened this issue Mar 11, 2013 · 8 comments

@kylelemons
Contributor

According to the thread:
https://groups.google.com/d/msg/golang-nuts/Ug1DhZGGqTk/tjKg9sMTdlcJ

The semaphore example in Effective Go is not precisely correct:
http://tip.golang.org/doc/effective_go.html#channels

In particular, this quote from Dmitry:
"It's perfectly fine to allow any code to hoist above [a buffered] chan send, or to
sink below [a buffered] chan receive."
https://groups.google.com/d/msg/golang-nuts/Ug1DhZGGqTk/WchLzVPtjEEJ

Thus, the following code:
func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    process(r)  // May take a long time.
    <-sem       // Done; enable next request to run.
}

could legally be "optimized" into:
func handle(r *Request) {
    process(r)  // May take a long time.
    sem <- 1    // Wait for active queue to drain.
    <-sem       // Done; enable next request to run.
}

or into:
func handle(r *Request) {
    sem <- 1    // Wait for active queue to drain.
    <-sem       // Done; enable next request to run.
    process(r)  // May take a long time.
}

The correct way to write it would be:
func handle(r *Request) {
    <-sem       // Wait for a token from the semaphore
    process(r)  // May take a long time.
    sem <- 1    // Allow another process to run
}
(with a corresponding loop to fill the channel with tokens at initialization)

If we want to make this documentation change before go1.1, I can do so, but I'm not sure whether it's already too late for that.
@adg
Contributor

adg commented Mar 11, 2013

Comment 2:

We'll be accepting documentation fixes until right before the release. Please go ahead and send a change, if Dmitry concurs that this is an error. (It looks like one to me.)

Labels changed: added priority-soon, go1.1, removed priority-triage.

Status changed to Accepted.

@dvyukov
Member

dvyukov commented Mar 11, 2013

Comment 4:

Yes, I concur. And the race detector does as well (e.g. if you use a chan-based mutex written this way, the detector will complain).

@kylelemons
Contributor Author

Comment 5:

Dmitry, will the race detector complain about any use of a channel-based mutex or semaphore construction, or only about ones that rely on sequential consistency?

@robpike
Contributor

robpike commented Mar 11, 2013

Comment 6:

Status changed to Started.

@robpike
Contributor

robpike commented Mar 12, 2013

Comment 7:

This issue was closed by revision 9dfcfb9.

Status changed to Fixed.

@enormouspenguin

It's 2015 already and this issue is declared "Fixed" (and even closed), so how come the example in Effective Go is still the wrong one?

@DisposaBlackSwan

@kimkhanh unless I misunderstood what you're asking, I believe this link should be relevant: https://golang.org/doc/go1.3#memory

@enormouspenguin

Sorry, I didn't know about that. How stupid of me to refer to a stale issue. The memory model gets really confusing, let alone multiple changes to it across different versions of the language. So does that mean the quote from Dmitry, "It's perfectly fine to allow any code to hoist above [a buffered] chan send, or to sink below [a buffered] chan receive.", is no longer in effect, and the Go compiler is not allowed to reorder statements in the same situation with buffered channels?

@rsc rsc added this to the Go1.1 milestone Apr 14, 2015
@rsc rsc removed the go1.1 label Apr 14, 2015
@golang golang locked and limited conversation to collaborators Jun 24, 2016
This issue was closed.
8 participants