tcp: prepare fastopen code for upcoming listener changes

While auditing the TCP stack for the upcoming 'lockless' listener changes,
I found I had to change fastopen_init_queue() to properly initialize the
object before publishing it.

Otherwise, another CPU could try to lock the spinlock before it has been
properly initialized.
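
For illustration only, here is a userspace C11 analogy of the init-before-publish
discipline that would otherwise be required (invented names, not kernel code; the
in-kernel equivalent would involve smp_store_release()/smp_load_acquire() or
explicit barriers):

	/* Build with: cc -std=c11 -pthread publish.c */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct fastopen_like {		/* stand-in for the dynamically
					 * allocated fastopen queue */
		pthread_mutex_t lock;
		int		max_qlen;
	};

	static _Atomic(struct fastopen_like *) published;

	static void *publisher(void *arg)
	{
		struct fastopen_like *q = calloc(1, sizeof(*q));

		(void)arg;
		pthread_mutex_init(&q->lock, NULL);	/* init every field first... */
		q->max_qlen = 16;
		/* ...then publish with release semantics, so a reader that sees
		 * the pointer also sees the initialized lock. Publishing before
		 * the init (or without the barrier) is the bug described above. */
		atomic_store_explicit(&published, q, memory_order_release);
		return NULL;
	}

	static void *consumer(void *arg)
	{
		struct fastopen_like *q;

		(void)arg;
		/* acquire pairs with the release store in publisher() */
		while (!(q = atomic_load_explicit(&published, memory_order_acquire)))
			;
		pthread_mutex_lock(&q->lock);		/* safe: fully initialized */
		printf("max_qlen=%d\n", q->max_qlen);
		pthread_mutex_unlock(&q->lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, consumer, NULL);
		pthread_create(&b, NULL, publisher, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		return 0;
	}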

Instead of adding the appropriate barriers, just remove the dynamic memory
allocation (a simplified before/after sketch follows the list):
- The structure is 28 bytes on 64-bit arches. Using an additional 8 bytes
  to hold a pointer to it seems overkill.
- Two listeners could share the same cache line and performance would suffer.
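
A simplified before/after sketch of the containing structure (struct
request_sock_queue, defined in include/net/request_sock.h; field lists
abridged, comments added here for illustration):

	/* before: an 8-byte pointer to a separately kzalloc()ed ~28-byte
	 * object; two listeners' objects may end up sharing a cache line */
	struct request_sock_queue {
		/* ... */
		struct fastopen_queue *fastopenq;
	};

	/* after: the structure is embedded, so it exists and can be fully
	 * initialized (spin_lock_init() etc.) before the listener socket is
	 * visible to other CPUs */
	struct request_sock_queue {
		/* ... */
		struct fastopen_queue fastopenq;
	};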

If we really want to save a few bytes, we could instead dynamically allocate
the whole struct request_sock_queue in the future.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit 0536fcc039, parent 2985aaac01
Author: Eric Dumazet, 2015-09-29 07:42:52 -07:00; committed by David S. Miller
9 changed files with 34 additions and 59 deletions

@@ -382,25 +382,11 @@ static inline bool tcp_passive_fastopen(const struct sock *sk)
 		tcp_sk(sk)->fastopen_rsk != NULL);
 }
 
 extern void tcp_sock_destruct(struct sock *sk);
 
-static inline int fastopen_init_queue(struct sock *sk, int backlog)
+static inline void fastopen_queue_tune(struct sock *sk, int backlog)
 {
-	struct request_sock_queue *queue =
-			&inet_csk(sk)->icsk_accept_queue;
-
-	if (queue->fastopenq == NULL) {
-		queue->fastopenq = kzalloc(
-			sizeof(struct fastopen_queue),
-			sk->sk_allocation);
-		if (queue->fastopenq == NULL)
-			return -ENOMEM;
-
-		sk->sk_destruct = tcp_sock_destruct;
-		spin_lock_init(&queue->fastopenq->lock);
-	}
-	queue->fastopenq->max_qlen = backlog;
-	return 0;
+	struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;
+
+	queue->fastopenq.max_qlen = backlog;
 }
 
 static inline void tcp_saved_syn_free(struct tcp_sock *tp)
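
Since fastopen_queue_tune() can no longer fail, call sites elsewhere in this
commit lose their error handling; roughly (a sketch, not quoted verbatim from
the patch):

	/* before: the allocation could fail, so callers such as inet_listen()
	 * had to check the return value */
	err = fastopen_init_queue(sk, backlog);
	if (err)
		goto out;

	/* after: just tune the embedded queue's limit; there is no failure path */
	fastopen_queue_tune(sk, backlog);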