Networking fixes for 6.5-rc2, including fixes from netfilter, wireless and ebpf
 
 Current release - regressions:
 
   - netfilter: conntrack: gre: don't set assured flag for clash entries
 
   - wifi: iwlwifi: remove 'use_tfh' config to fix crash
 
 Previous releases - regressions:
 
   - ipv6: fix a potential refcount underflow for idev
 
   - icmp6: fix null-ptr-deref of ip6_null_entry->rt6i_idev in icmp6_dev()
 
   - bpf: fix max stack depth check for async callbacks
 
   - eth: mlx5e:
     - check for NOT_READY flag state after locking
     - fix page_pool page fragment tracking for XDP
 
   - eth: igc:
     - fix tx hang issue when QBV gate is closed
     - fix corner cases for TSN offload
 
   - eth: octeontx2-af: Move validation of ptp pointer before its usage
 
   - eth: ena: fix shift-out-of-bounds in exponential backoff
 
 Previous releases - always broken:
 
   - core: prevent skb corruption on frag list segmentation
 
   - sched:
     - cls_fw: fix improper refcount update leads to use-after-free
     - sch_qfq: account for stab overhead in qfq_enqueue
 
   - netfilter:
     - report use refcount overflow
     - prevent OOB access in nft_byteorder_eval
 
   - wifi: mt7921e: fix init command fail with enabled device
 
   - eth: ocelot: fix oversize frame dropping for preemptible TCs
 
   - eth: fec: recycle pages for transmitted XDP frames
 
 Signed-off-by: Paolo Abeni <pabeni@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEEg1AjqC77wbdLX2LbKSR5jcyPE6QFAmSv1YISHHBhYmVuaUBy
 ZWRoYXQuY29tAAoJECkkeY3MjxOkpQgP/1msj0MlIWJnMgzPiMonDSe746JGTg/j
 YengEjqcy3ozC4COBEeyBO6ilt6I+Wrb5H5jimn9h2djB+D7htWNaejQaqJrBxph
 F4lUC6OJqd2ncI3tXAG2BSX1duzDr6B7yL7d5InFIczw8vNh+chsyX0sjlzU12bt
 ppjcSb+Ffc796DB0ItJkBqluxcpjyXE15ZWTTV4GEHK6RoRdxNIGjd7NgvD8podB
 Q/464bHs1jJYkAavuobiOXV2fuxWLTs77E0Vloizoo+42UiRFMLJk+RX98PhSIMa
 eejkxfm+H6+6Qi2omYepvf7vDN3GtLjxbr5C3mTdWPuL4QbNY8agVJ7sS4XnL5/v
 B7EAjyGQK9SmD36zTu7QL/Ul6fSnRq8jz20B0mDa0imAWzi58A+jqbQAMoVOMSS+
 Uv4yKJpIUyx7mUI77+EX3U9r1wytw5eniatTDU+GAsQb2CJ43CqDmn/7RcmGacBo
 P1q+il9JW4kzUQrisUSxmQDfpBvQi5wiygiEdUNI5FEhq6/iKe/lrJnmJZpaLkd5
 P3oEKjapamAmcyrEr/7VD1Mb4jrRfpB7zVn/5OyvywbcLQxA+531iPpy4r4W6cWH
 1MRLBVVHKyb3jfm8J3T4lpDEzd03+MiPS8JiKMUYYNUYkY8tYp92muwC7z2sGI4M
 6eR2MeKD4vds
 =cELX
 -----END PGP SIGNATURE-----

Merge tag 'net-6.5-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from netfilter, wireless and ebpf.

  Current release - regressions:

   - netfilter: conntrack: gre: don't set assured flag for clash entries

   - wifi: iwlwifi: remove 'use_tfh' config to fix crash

  Previous releases - regressions:

   - ipv6: fix a potential refcount underflow for idev

   - icmp6: fix null-ptr-deref of ip6_null_entry->rt6i_idev in
     icmp6_dev()

   - bpf: fix max stack depth check for async callbacks

   - eth: mlx5e:
      - check for NOT_READY flag state after locking
      - fix page_pool page fragment tracking for XDP

   - eth: igc:
      - fix tx hang issue when QBV gate is closed
      - fix corner cases for TSN offload

   - eth: octeontx2-af: Move validation of ptp pointer before its usage

   - eth: ena: fix shift-out-of-bounds in exponential backoff

  Previous releases - always broken:

   - core: prevent skb corruption on frag list segmentation

   - sched:
      - cls_fw: fix improper refcount update leads to use-after-free
      - sch_qfq: account for stab overhead in qfq_enqueue

   - netfilter:
      - report use refcount overflow
      - prevent OOB access in nft_byteorder_eval

   - wifi: mt7921e: fix init command fail with enabled device

   - eth: ocelot: fix oversize frame dropping for preemptible TCs

   - eth: fec: recycle pages for transmitted XDP frames"

* tag 'net-6.5-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (79 commits)
  selftests: tc-testing: add test for qfq with stab overhead
  net/sched: sch_qfq: account for stab overhead in qfq_enqueue
  selftests: tc-testing: add tests for qfq mtu sanity check
  net/sched: sch_qfq: reintroduce lmax bound check for MTU
  wifi: cfg80211: fix receiving mesh packets without RFC1042 header
  wifi: rtw89: debug: fix error code in rtw89_debug_priv_send_h2c_set()
  net: txgbe: fix eeprom calculation error
  net/sched: make psched_mtu() RTNL-less safe
  net: ena: fix shift-out-of-bounds in exponential backoff
  netdevsim: fix uninitialized data in nsim_dev_trap_fa_cookie_write()
  net/sched: flower: Ensure both minimum and maximum ports are specified
  MAINTAINERS: Add another mailing list for QUALCOMM ETHQOS ETHERNET DRIVER
  docs: netdev: update the URL of the status page
  wifi: iwlwifi: remove 'use_tfh' config to fix crash
  xdp: use trusted arguments in XDP hints kfuncs
  bpf: cpumap: Fix memory leak in cpu_map_update_elem
  wifi: airo: avoid uninitialized warning in airo_get_rate()
  octeontx2-pf: Add additional check for MCAM rules
  net: dsa: Removed unneeded of_node_put in felix_parse_ports_node
  net: fec: use netdev_err_once() instead of netdev_err()
  ...
Linus Torvalds 2023-07-13 14:21:22 -07:00
commit b1983d427a
91 changed files with 1018 additions and 528 deletions

View file

@ -98,7 +98,7 @@ If you aren't subscribed to netdev and/or are simply unsure if
repository link above for any new networking-related commits. You may repository link above for any new networking-related commits. You may
also check the following website for the current status: also check the following website for the current status:
http://vger.kernel.org/~davem/net-next.html https://patchwork.hopto.org/net-next.html
The ``net`` tree continues to collect fixes for the vX.Y content, and is The ``net`` tree continues to collect fixes for the vX.Y content, and is
fed back to Linus at regular (~weekly) intervals. Meaning that the fed back to Linus at regular (~weekly) intervals. Meaning that the

View file

@ -17543,6 +17543,7 @@ QUALCOMM ETHQOS ETHERNET DRIVER
M: Vinod Koul <vkoul@kernel.org> M: Vinod Koul <vkoul@kernel.org>
R: Bhupesh Sharma <bhupesh.sharma@linaro.org> R: Bhupesh Sharma <bhupesh.sharma@linaro.org>
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
L: linux-arm-msm@vger.kernel.org
S: Maintained S: Maintained
F: Documentation/devicetree/bindings/net/qcom,ethqos.yaml F: Documentation/devicetree/bindings/net/qcom,ethqos.yaml
F: drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c F: drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c

View file

@ -69,7 +69,7 @@ struct rv_jit_context {
struct bpf_prog *prog; struct bpf_prog *prog;
u16 *insns; /* RV insns */ u16 *insns; /* RV insns */
int ninsns; int ninsns;
int body_len; int prologue_len;
int epilogue_offset; int epilogue_offset;
int *offset; /* BPF to RV */ int *offset; /* BPF to RV */
int nexentries; int nexentries;
@ -216,8 +216,8 @@ static inline int rv_offset(int insn, int off, struct rv_jit_context *ctx)
int from, to; int from, to;
off++; /* BPF branch is from PC+1, RV is from PC */ off++; /* BPF branch is from PC+1, RV is from PC */
from = (insn > 0) ? ctx->offset[insn - 1] : 0; from = (insn > 0) ? ctx->offset[insn - 1] : ctx->prologue_len;
to = (insn + off > 0) ? ctx->offset[insn + off - 1] : 0; to = (insn + off > 0) ? ctx->offset[insn + off - 1] : ctx->prologue_len;
return ninsns_rvoff(to - from); return ninsns_rvoff(to - from);
} }

View file

@ -44,7 +44,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
unsigned int prog_size = 0, extable_size = 0; unsigned int prog_size = 0, extable_size = 0;
bool tmp_blinded = false, extra_pass = false; bool tmp_blinded = false, extra_pass = false;
struct bpf_prog *tmp, *orig_prog = prog; struct bpf_prog *tmp, *orig_prog = prog;
int pass = 0, prev_ninsns = 0, prologue_len, i; int pass = 0, prev_ninsns = 0, i;
struct rv_jit_data *jit_data; struct rv_jit_data *jit_data;
struct rv_jit_context *ctx; struct rv_jit_context *ctx;
@ -83,6 +83,12 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
prog = orig_prog; prog = orig_prog;
goto out_offset; goto out_offset;
} }
if (build_body(ctx, extra_pass, NULL)) {
prog = orig_prog;
goto out_offset;
}
for (i = 0; i < prog->len; i++) { for (i = 0; i < prog->len; i++) {
prev_ninsns += 32; prev_ninsns += 32;
ctx->offset[i] = prev_ninsns; ctx->offset[i] = prev_ninsns;
@ -91,12 +97,15 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
for (i = 0; i < NR_JIT_ITERATIONS; i++) { for (i = 0; i < NR_JIT_ITERATIONS; i++) {
pass++; pass++;
ctx->ninsns = 0; ctx->ninsns = 0;
bpf_jit_build_prologue(ctx);
ctx->prologue_len = ctx->ninsns;
if (build_body(ctx, extra_pass, ctx->offset)) { if (build_body(ctx, extra_pass, ctx->offset)) {
prog = orig_prog; prog = orig_prog;
goto out_offset; goto out_offset;
} }
ctx->body_len = ctx->ninsns;
bpf_jit_build_prologue(ctx);
ctx->epilogue_offset = ctx->ninsns; ctx->epilogue_offset = ctx->ninsns;
bpf_jit_build_epilogue(ctx); bpf_jit_build_epilogue(ctx);
@ -162,10 +171,8 @@ skip_init_ctx:
if (!prog->is_func || extra_pass) { if (!prog->is_func || extra_pass) {
bpf_jit_binary_lock_ro(jit_data->header); bpf_jit_binary_lock_ro(jit_data->header);
prologue_len = ctx->epilogue_offset - ctx->body_len;
for (i = 0; i < prog->len; i++) for (i = 0; i < prog->len; i++)
ctx->offset[i] = ninsns_rvoff(prologue_len + ctx->offset[i] = ninsns_rvoff(ctx->offset[i]);
ctx->offset[i]);
bpf_prog_fill_jited_linfo(prog, ctx->offset); bpf_prog_fill_jited_linfo(prog, ctx->offset);
out_offset: out_offset:
kfree(ctx->offset); kfree(ctx->offset);

View file

@ -1286,7 +1286,6 @@ static int felix_parse_ports_node(struct felix *felix,
if (err < 0) { if (err < 0) {
dev_info(dev, "Unsupported PHY mode %s on port %d\n", dev_info(dev, "Unsupported PHY mode %s on port %d\n",
phy_modes(phy_mode), port); phy_modes(phy_mode), port);
of_node_put(child);
/* Leave port_phy_modes[port] = 0, which is also /* Leave port_phy_modes[port] = 0, which is also
* PHY_INTERFACE_MODE_NA. This will perform a * PHY_INTERFACE_MODE_NA. This will perform a
@ -1786,16 +1785,15 @@ static int felix_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
{ {
struct ocelot *ocelot = ds->priv; struct ocelot *ocelot = ds->priv;
struct ocelot_port *ocelot_port = ocelot->ports[port]; struct ocelot_port *ocelot_port = ocelot->ports[port];
struct felix *felix = ocelot_to_felix(ocelot);
ocelot_port_set_maxlen(ocelot, port, new_mtu); ocelot_port_set_maxlen(ocelot, port, new_mtu);
mutex_lock(&ocelot->tas_lock); mutex_lock(&ocelot->fwd_domain_lock);
if (ocelot_port->taprio && felix->info->tas_guard_bands_update) if (ocelot_port->taprio && ocelot->ops->tas_guard_bands_update)
felix->info->tas_guard_bands_update(ocelot, port); ocelot->ops->tas_guard_bands_update(ocelot, port);
mutex_unlock(&ocelot->tas_lock); mutex_unlock(&ocelot->fwd_domain_lock);
return 0; return 0;
} }

View file

@ -57,7 +57,6 @@ struct felix_info {
void (*mdio_bus_free)(struct ocelot *ocelot); void (*mdio_bus_free)(struct ocelot *ocelot);
int (*port_setup_tc)(struct dsa_switch *ds, int port, int (*port_setup_tc)(struct dsa_switch *ds, int port,
enum tc_setup_type type, void *type_data); enum tc_setup_type type, void *type_data);
void (*tas_guard_bands_update)(struct ocelot *ocelot, int port);
void (*port_sched_speed_set)(struct ocelot *ocelot, int port, void (*port_sched_speed_set)(struct ocelot *ocelot, int port,
u32 speed); u32 speed);
void (*phylink_mac_config)(struct ocelot *ocelot, int port, void (*phylink_mac_config)(struct ocelot *ocelot, int port,

View file

@ -1209,15 +1209,17 @@ static u32 vsc9959_tas_tc_max_sdu(struct tc_taprio_qopt_offload *taprio, int tc)
static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port) static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
{ {
struct ocelot_port *ocelot_port = ocelot->ports[port]; struct ocelot_port *ocelot_port = ocelot->ports[port];
struct ocelot_mm_state *mm = &ocelot->mm[port];
struct tc_taprio_qopt_offload *taprio; struct tc_taprio_qopt_offload *taprio;
u64 min_gate_len[OCELOT_NUM_TC]; u64 min_gate_len[OCELOT_NUM_TC];
u32 val, maxlen, add_frag_size;
u64 needed_min_frag_time_ps;
int speed, picos_per_byte; int speed, picos_per_byte;
u64 needed_bit_time_ps; u64 needed_bit_time_ps;
u32 val, maxlen;
u8 tas_speed; u8 tas_speed;
int tc; int tc;
lockdep_assert_held(&ocelot->tas_lock); lockdep_assert_held(&ocelot->fwd_domain_lock);
taprio = ocelot_port->taprio; taprio = ocelot_port->taprio;
@ -1253,14 +1255,21 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
*/ */
needed_bit_time_ps = (u64)(maxlen + 24) * picos_per_byte; needed_bit_time_ps = (u64)(maxlen + 24) * picos_per_byte;
/* Preemptible TCs don't need to pass a full MTU, the port will
* automatically emit a HOLD request when a preemptible TC gate closes
*/
val = ocelot_read_rix(ocelot, QSYS_PREEMPTION_CFG, port);
add_frag_size = QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_X(val);
needed_min_frag_time_ps = picos_per_byte *
(u64)(24 + 2 * ethtool_mm_frag_size_add_to_min(add_frag_size));
dev_dbg(ocelot->dev, dev_dbg(ocelot->dev,
"port %d: max frame size %d needs %llu ps at speed %d\n", "port %d: max frame size %d needs %llu ps, %llu ps for mPackets at speed %d\n",
port, maxlen, needed_bit_time_ps, speed); port, maxlen, needed_bit_time_ps, needed_min_frag_time_ps,
speed);
vsc9959_tas_min_gate_lengths(taprio, min_gate_len); vsc9959_tas_min_gate_lengths(taprio, min_gate_len);
mutex_lock(&ocelot->fwd_domain_lock);
for (tc = 0; tc < OCELOT_NUM_TC; tc++) { for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
u32 requested_max_sdu = vsc9959_tas_tc_max_sdu(taprio, tc); u32 requested_max_sdu = vsc9959_tas_tc_max_sdu(taprio, tc);
u64 remaining_gate_len_ps; u64 remaining_gate_len_ps;
@ -1269,7 +1278,9 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
remaining_gate_len_ps = remaining_gate_len_ps =
vsc9959_tas_remaining_gate_len_ps(min_gate_len[tc]); vsc9959_tas_remaining_gate_len_ps(min_gate_len[tc]);
if (remaining_gate_len_ps > needed_bit_time_ps) { if ((mm->active_preemptible_tcs & BIT(tc)) ?
remaining_gate_len_ps > needed_min_frag_time_ps :
remaining_gate_len_ps > needed_bit_time_ps) {
/* Setting QMAXSDU_CFG to 0 disables oversized frame /* Setting QMAXSDU_CFG to 0 disables oversized frame
* dropping. * dropping.
*/ */
@ -1323,8 +1334,6 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
ocelot_write_rix(ocelot, maxlen, QSYS_PORT_MAX_SDU, port); ocelot_write_rix(ocelot, maxlen, QSYS_PORT_MAX_SDU, port);
ocelot->ops->cut_through_fwd(ocelot); ocelot->ops->cut_through_fwd(ocelot);
mutex_unlock(&ocelot->fwd_domain_lock);
} }
static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port, static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
@ -1351,7 +1360,7 @@ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
break; break;
} }
mutex_lock(&ocelot->tas_lock); mutex_lock(&ocelot->fwd_domain_lock);
ocelot_rmw_rix(ocelot, ocelot_rmw_rix(ocelot,
QSYS_TAG_CONFIG_LINK_SPEED(tas_speed), QSYS_TAG_CONFIG_LINK_SPEED(tas_speed),
@ -1361,7 +1370,7 @@ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
if (ocelot_port->taprio) if (ocelot_port->taprio)
vsc9959_tas_guard_bands_update(ocelot, port); vsc9959_tas_guard_bands_update(ocelot, port);
mutex_unlock(&ocelot->tas_lock); mutex_unlock(&ocelot->fwd_domain_lock);
} }
static void vsc9959_new_base_time(struct ocelot *ocelot, ktime_t base_time, static void vsc9959_new_base_time(struct ocelot *ocelot, ktime_t base_time,
@ -1409,7 +1418,7 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
int ret, i; int ret, i;
u32 val; u32 val;
mutex_lock(&ocelot->tas_lock); mutex_lock(&ocelot->fwd_domain_lock);
if (taprio->cmd == TAPRIO_CMD_DESTROY) { if (taprio->cmd == TAPRIO_CMD_DESTROY) {
ocelot_port_mqprio(ocelot, port, &taprio->mqprio); ocelot_port_mqprio(ocelot, port, &taprio->mqprio);
@ -1421,7 +1430,7 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
vsc9959_tas_guard_bands_update(ocelot, port); vsc9959_tas_guard_bands_update(ocelot, port);
mutex_unlock(&ocelot->tas_lock); mutex_unlock(&ocelot->fwd_domain_lock);
return 0; return 0;
} else if (taprio->cmd != TAPRIO_CMD_REPLACE) { } else if (taprio->cmd != TAPRIO_CMD_REPLACE) {
ret = -EOPNOTSUPP; ret = -EOPNOTSUPP;
@ -1504,7 +1513,7 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
ocelot_port->taprio = taprio_offload_get(taprio); ocelot_port->taprio = taprio_offload_get(taprio);
vsc9959_tas_guard_bands_update(ocelot, port); vsc9959_tas_guard_bands_update(ocelot, port);
mutex_unlock(&ocelot->tas_lock); mutex_unlock(&ocelot->fwd_domain_lock);
return 0; return 0;
@ -1512,7 +1521,7 @@ err_reset_tc:
taprio->mqprio.qopt.num_tc = 0; taprio->mqprio.qopt.num_tc = 0;
ocelot_port_mqprio(ocelot, port, &taprio->mqprio); ocelot_port_mqprio(ocelot, port, &taprio->mqprio);
err_unlock: err_unlock:
mutex_unlock(&ocelot->tas_lock); mutex_unlock(&ocelot->fwd_domain_lock);
return ret; return ret;
} }
@ -1525,7 +1534,7 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
int port; int port;
u32 val; u32 val;
mutex_lock(&ocelot->tas_lock); mutex_lock(&ocelot->fwd_domain_lock);
for (port = 0; port < ocelot->num_phys_ports; port++) { for (port = 0; port < ocelot->num_phys_ports; port++) {
ocelot_port = ocelot->ports[port]; ocelot_port = ocelot->ports[port];
@ -1563,7 +1572,7 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
QSYS_TAG_CONFIG_ENABLE, QSYS_TAG_CONFIG_ENABLE,
QSYS_TAG_CONFIG, port); QSYS_TAG_CONFIG, port);
} }
mutex_unlock(&ocelot->tas_lock); mutex_unlock(&ocelot->fwd_domain_lock);
} }
static int vsc9959_qos_port_cbs_set(struct dsa_switch *ds, int port, static int vsc9959_qos_port_cbs_set(struct dsa_switch *ds, int port,
@ -1634,6 +1643,18 @@ static int vsc9959_qos_query_caps(struct tc_query_caps_base *base)
} }
} }
static int vsc9959_qos_port_mqprio(struct ocelot *ocelot, int port,
struct tc_mqprio_qopt_offload *mqprio)
{
int ret;
mutex_lock(&ocelot->fwd_domain_lock);
ret = ocelot_port_mqprio(ocelot, port, mqprio);
mutex_unlock(&ocelot->fwd_domain_lock);
return ret;
}
static int vsc9959_port_setup_tc(struct dsa_switch *ds, int port, static int vsc9959_port_setup_tc(struct dsa_switch *ds, int port,
enum tc_setup_type type, enum tc_setup_type type,
void *type_data) void *type_data)
@ -1646,7 +1667,7 @@ static int vsc9959_port_setup_tc(struct dsa_switch *ds, int port,
case TC_SETUP_QDISC_TAPRIO: case TC_SETUP_QDISC_TAPRIO:
return vsc9959_qos_port_tas_set(ocelot, port, type_data); return vsc9959_qos_port_tas_set(ocelot, port, type_data);
case TC_SETUP_QDISC_MQPRIO: case TC_SETUP_QDISC_MQPRIO:
return ocelot_port_mqprio(ocelot, port, type_data); return vsc9959_qos_port_mqprio(ocelot, port, type_data);
case TC_SETUP_QDISC_CBS: case TC_SETUP_QDISC_CBS:
return vsc9959_qos_port_cbs_set(ds, port, type_data); return vsc9959_qos_port_cbs_set(ds, port, type_data);
default: default:
@ -2591,6 +2612,7 @@ static const struct ocelot_ops vsc9959_ops = {
.cut_through_fwd = vsc9959_cut_through_fwd, .cut_through_fwd = vsc9959_cut_through_fwd,
.tas_clock_adjust = vsc9959_tas_clock_adjust, .tas_clock_adjust = vsc9959_tas_clock_adjust,
.update_stats = vsc9959_update_stats, .update_stats = vsc9959_update_stats,
.tas_guard_bands_update = vsc9959_tas_guard_bands_update,
}; };
static const struct felix_info felix_info_vsc9959 = { static const struct felix_info felix_info_vsc9959 = {
@ -2616,7 +2638,6 @@ static const struct felix_info felix_info_vsc9959 = {
.port_modes = vsc9959_port_modes, .port_modes = vsc9959_port_modes,
.port_setup_tc = vsc9959_port_setup_tc, .port_setup_tc = vsc9959_port_setup_tc,
.port_sched_speed_set = vsc9959_sched_speed_set, .port_sched_speed_set = vsc9959_sched_speed_set,
.tas_guard_bands_update = vsc9959_tas_guard_bands_update,
}; };
/* The INTB interrupt is shared between for PTP TX timestamp availability /* The INTB interrupt is shared between for PTP TX timestamp availability

View file

@ -588,6 +588,9 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
bool ack; bool ack;
int ret; int ret;
if (!skb)
return -ENOMEM;
reinit_completion(&mgmt_eth_data->rw_done); reinit_completion(&mgmt_eth_data->rw_done);
/* Increment seq_num and set it in the copy pkt */ /* Increment seq_num and set it in the copy pkt */

View file

@ -35,6 +35,8 @@
#define ENA_REGS_ADMIN_INTR_MASK 1 #define ENA_REGS_ADMIN_INTR_MASK 1
#define ENA_MAX_BACKOFF_DELAY_EXP 16U
#define ENA_MIN_ADMIN_POLL_US 100 #define ENA_MIN_ADMIN_POLL_US 100
#define ENA_MAX_ADMIN_POLL_US 5000 #define ENA_MAX_ADMIN_POLL_US 5000
@ -536,6 +538,7 @@ static int ena_com_comp_status_to_errno(struct ena_com_admin_queue *admin_queue,
static void ena_delay_exponential_backoff_us(u32 exp, u32 delay_us) static void ena_delay_exponential_backoff_us(u32 exp, u32 delay_us)
{ {
exp = min_t(u32, exp, ENA_MAX_BACKOFF_DELAY_EXP);
delay_us = max_t(u32, ENA_MIN_ADMIN_POLL_US, delay_us); delay_us = max_t(u32, ENA_MIN_ADMIN_POLL_US, delay_us);
delay_us = min_t(u32, delay_us * (1U << exp), ENA_MAX_ADMIN_POLL_US); delay_us = min_t(u32, delay_us * (1U << exp), ENA_MAX_ADMIN_POLL_US);
usleep_range(delay_us, 2 * delay_us); usleep_range(delay_us, 2 * delay_us);

View file

@ -1492,8 +1492,6 @@ int bgmac_enet_probe(struct bgmac *bgmac)
bgmac->in_init = true; bgmac->in_init = true;
bgmac_chip_intrs_off(bgmac);
net_dev->irq = bgmac->irq; net_dev->irq = bgmac->irq;
SET_NETDEV_DEV(net_dev, bgmac->dev); SET_NETDEV_DEV(net_dev, bgmac->dev);
dev_set_drvdata(bgmac->dev, bgmac); dev_set_drvdata(bgmac->dev, bgmac);
@ -1511,6 +1509,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
*/ */
bgmac_clk_enable(bgmac, 0); bgmac_clk_enable(bgmac, 0);
bgmac_chip_intrs_off(bgmac);
/* This seems to be fixing IRQ by assigning OOB #6 to the core */ /* This seems to be fixing IRQ by assigning OOB #6 to the core */
if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK)) { if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK)) {
if (bgmac->feature_flags & BGMAC_FEAT_IRQ_ID_OOB_6) if (bgmac->feature_flags & BGMAC_FEAT_IRQ_ID_OOB_6)

View file

@ -355,7 +355,7 @@ struct bufdesc_ex {
#define RX_RING_SIZE (FEC_ENET_RX_FRPPG * FEC_ENET_RX_PAGES) #define RX_RING_SIZE (FEC_ENET_RX_FRPPG * FEC_ENET_RX_PAGES)
#define FEC_ENET_TX_FRSIZE 2048 #define FEC_ENET_TX_FRSIZE 2048
#define FEC_ENET_TX_FRPPG (PAGE_SIZE / FEC_ENET_TX_FRSIZE) #define FEC_ENET_TX_FRPPG (PAGE_SIZE / FEC_ENET_TX_FRSIZE)
#define TX_RING_SIZE 512 /* Must be power of two */ #define TX_RING_SIZE 1024 /* Must be power of two */
#define TX_RING_MOD_MASK 511 /* for this to work */ #define TX_RING_MOD_MASK 511 /* for this to work */
#define BD_ENET_RX_INT 0x00800000 #define BD_ENET_RX_INT 0x00800000
@ -544,10 +544,23 @@ enum {
XDP_STATS_TOTAL, XDP_STATS_TOTAL,
}; };
enum fec_txbuf_type {
FEC_TXBUF_T_SKB,
FEC_TXBUF_T_XDP_NDO,
};
struct fec_tx_buffer {
union {
struct sk_buff *skb;
struct xdp_frame *xdp;
};
enum fec_txbuf_type type;
};
struct fec_enet_priv_tx_q { struct fec_enet_priv_tx_q {
struct bufdesc_prop bd; struct bufdesc_prop bd;
unsigned char *tx_bounce[TX_RING_SIZE]; unsigned char *tx_bounce[TX_RING_SIZE];
struct sk_buff *tx_skbuff[TX_RING_SIZE]; struct fec_tx_buffer tx_buf[TX_RING_SIZE];
unsigned short tx_stop_threshold; unsigned short tx_stop_threshold;
unsigned short tx_wake_threshold; unsigned short tx_wake_threshold;

View file

@ -397,7 +397,7 @@ static void fec_dump(struct net_device *ndev)
fec16_to_cpu(bdp->cbd_sc), fec16_to_cpu(bdp->cbd_sc),
fec32_to_cpu(bdp->cbd_bufaddr), fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen), fec16_to_cpu(bdp->cbd_datlen),
txq->tx_skbuff[index]); txq->tx_buf[index].skb);
bdp = fec_enet_get_nextdesc(bdp, &txq->bd); bdp = fec_enet_get_nextdesc(bdp, &txq->bd);
index++; index++;
} while (bdp != txq->bd.base); } while (bdp != txq->bd.base);
@ -654,7 +654,7 @@ static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
index = fec_enet_get_bd_index(last_bdp, &txq->bd); index = fec_enet_get_bd_index(last_bdp, &txq->bd);
/* Save skb pointer */ /* Save skb pointer */
txq->tx_skbuff[index] = skb; txq->tx_buf[index].skb = skb;
/* Make sure the updates to rest of the descriptor are performed before /* Make sure the updates to rest of the descriptor are performed before
* transferring ownership. * transferring ownership.
@ -672,9 +672,7 @@ static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
skb_tx_timestamp(skb); skb_tx_timestamp(skb);
/* Make sure the update to bdp and tx_skbuff are performed before /* Make sure the update to bdp is performed before txq->bd.cur. */
* txq->bd.cur.
*/
wmb(); wmb();
txq->bd.cur = bdp; txq->bd.cur = bdp;
@ -862,7 +860,7 @@ static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
} }
/* Save skb pointer */ /* Save skb pointer */
txq->tx_skbuff[index] = skb; txq->tx_buf[index].skb = skb;
skb_tx_timestamp(skb); skb_tx_timestamp(skb);
txq->bd.cur = bdp; txq->bd.cur = bdp;
@ -952,16 +950,33 @@ static void fec_enet_bd_init(struct net_device *dev)
for (i = 0; i < txq->bd.ring_size; i++) { for (i = 0; i < txq->bd.ring_size; i++) {
/* Initialize the BD for every fragment in the page. */ /* Initialize the BD for every fragment in the page. */
bdp->cbd_sc = cpu_to_fec16(0); bdp->cbd_sc = cpu_to_fec16(0);
if (bdp->cbd_bufaddr && if (txq->tx_buf[i].type == FEC_TXBUF_T_SKB) {
!IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr))) if (bdp->cbd_bufaddr &&
dma_unmap_single(&fep->pdev->dev, !IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))
fec32_to_cpu(bdp->cbd_bufaddr), dma_unmap_single(&fep->pdev->dev,
fec16_to_cpu(bdp->cbd_datlen), fec32_to_cpu(bdp->cbd_bufaddr),
DMA_TO_DEVICE); fec16_to_cpu(bdp->cbd_datlen),
if (txq->tx_skbuff[i]) { DMA_TO_DEVICE);
dev_kfree_skb_any(txq->tx_skbuff[i]); if (txq->tx_buf[i].skb) {
txq->tx_skbuff[i] = NULL; dev_kfree_skb_any(txq->tx_buf[i].skb);
txq->tx_buf[i].skb = NULL;
}
} else {
if (bdp->cbd_bufaddr)
dma_unmap_single(&fep->pdev->dev,
fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen),
DMA_TO_DEVICE);
if (txq->tx_buf[i].xdp) {
xdp_return_frame(txq->tx_buf[i].xdp);
txq->tx_buf[i].xdp = NULL;
}
/* restore default tx buffer type: FEC_TXBUF_T_SKB */
txq->tx_buf[i].type = FEC_TXBUF_T_SKB;
} }
bdp->cbd_bufaddr = cpu_to_fec32(0); bdp->cbd_bufaddr = cpu_to_fec32(0);
bdp = fec_enet_get_nextdesc(bdp, &txq->bd); bdp = fec_enet_get_nextdesc(bdp, &txq->bd);
} }
@ -1360,6 +1375,7 @@ static void
fec_enet_tx_queue(struct net_device *ndev, u16 queue_id) fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
{ {
struct fec_enet_private *fep; struct fec_enet_private *fep;
struct xdp_frame *xdpf;
struct bufdesc *bdp; struct bufdesc *bdp;
unsigned short status; unsigned short status;
struct sk_buff *skb; struct sk_buff *skb;
@ -1387,16 +1403,31 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
index = fec_enet_get_bd_index(bdp, &txq->bd); index = fec_enet_get_bd_index(bdp, &txq->bd);
skb = txq->tx_skbuff[index]; if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB) {
txq->tx_skbuff[index] = NULL; skb = txq->tx_buf[index].skb;
if (!IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr))) txq->tx_buf[index].skb = NULL;
dma_unmap_single(&fep->pdev->dev, if (bdp->cbd_bufaddr &&
fec32_to_cpu(bdp->cbd_bufaddr), !IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))
fec16_to_cpu(bdp->cbd_datlen), dma_unmap_single(&fep->pdev->dev,
DMA_TO_DEVICE); fec32_to_cpu(bdp->cbd_bufaddr),
bdp->cbd_bufaddr = cpu_to_fec32(0); fec16_to_cpu(bdp->cbd_datlen),
if (!skb) DMA_TO_DEVICE);
goto skb_done; bdp->cbd_bufaddr = cpu_to_fec32(0);
if (!skb)
goto tx_buf_done;
} else {
xdpf = txq->tx_buf[index].xdp;
if (bdp->cbd_bufaddr)
dma_unmap_single(&fep->pdev->dev,
fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen),
DMA_TO_DEVICE);
bdp->cbd_bufaddr = cpu_to_fec32(0);
if (!xdpf) {
txq->tx_buf[index].type = FEC_TXBUF_T_SKB;
goto tx_buf_done;
}
}
/* Check for errors. */ /* Check for errors. */
if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC | if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
@ -1415,21 +1446,11 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
ndev->stats.tx_carrier_errors++; ndev->stats.tx_carrier_errors++;
} else { } else {
ndev->stats.tx_packets++; ndev->stats.tx_packets++;
ndev->stats.tx_bytes += skb->len;
}
/* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB)
* are to time stamp the packet, so we still need to check time ndev->stats.tx_bytes += skb->len;
* stamping enabled flag. else
*/ ndev->stats.tx_bytes += xdpf->len;
if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS &&
fep->hwts_tx_en) &&
fep->bufdesc_ex) {
struct skb_shared_hwtstamps shhwtstamps;
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
fec_enet_hwtstamp(fep, fec32_to_cpu(ebdp->ts), &shhwtstamps);
skb_tstamp_tx(skb, &shhwtstamps);
} }
/* Deferred means some collisions occurred during transmit, /* Deferred means some collisions occurred during transmit,
@ -1438,10 +1459,32 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
if (status & BD_ENET_TX_DEF) if (status & BD_ENET_TX_DEF)
ndev->stats.collisions++; ndev->stats.collisions++;
/* Free the sk buffer associated with this last transmit */ if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB) {
dev_kfree_skb_any(skb); /* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who
skb_done: * are to time stamp the packet, so we still need to check time
/* Make sure the update to bdp and tx_skbuff are performed * stamping enabled flag.
*/
if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS &&
fep->hwts_tx_en) && fep->bufdesc_ex) {
struct skb_shared_hwtstamps shhwtstamps;
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
fec_enet_hwtstamp(fep, fec32_to_cpu(ebdp->ts), &shhwtstamps);
skb_tstamp_tx(skb, &shhwtstamps);
}
/* Free the sk buffer associated with this last transmit */
dev_kfree_skb_any(skb);
} else {
xdp_return_frame(xdpf);
txq->tx_buf[index].xdp = NULL;
/* restore default tx buffer type: FEC_TXBUF_T_SKB */
txq->tx_buf[index].type = FEC_TXBUF_T_SKB;
}
tx_buf_done:
/* Make sure the update to bdp and tx_buf are performed
* before dirty_tx * before dirty_tx
*/ */
wmb(); wmb();
@ -3249,9 +3292,19 @@ static void fec_enet_free_buffers(struct net_device *ndev)
for (i = 0; i < txq->bd.ring_size; i++) { for (i = 0; i < txq->bd.ring_size; i++) {
kfree(txq->tx_bounce[i]); kfree(txq->tx_bounce[i]);
txq->tx_bounce[i] = NULL; txq->tx_bounce[i] = NULL;
skb = txq->tx_skbuff[i];
txq->tx_skbuff[i] = NULL; if (txq->tx_buf[i].type == FEC_TXBUF_T_SKB) {
dev_kfree_skb(skb); skb = txq->tx_buf[i].skb;
txq->tx_buf[i].skb = NULL;
dev_kfree_skb(skb);
} else {
if (txq->tx_buf[i].xdp) {
xdp_return_frame(txq->tx_buf[i].xdp);
txq->tx_buf[i].xdp = NULL;
}
txq->tx_buf[i].type = FEC_TXBUF_T_SKB;
}
} }
} }
} }
@ -3296,8 +3349,7 @@ static int fec_enet_alloc_queue(struct net_device *ndev)
fep->total_tx_ring_size += fep->tx_queue[i]->bd.ring_size; fep->total_tx_ring_size += fep->tx_queue[i]->bd.ring_size;
txq->tx_stop_threshold = FEC_MAX_SKB_DESCS; txq->tx_stop_threshold = FEC_MAX_SKB_DESCS;
txq->tx_wake_threshold = txq->tx_wake_threshold = FEC_MAX_SKB_DESCS + 2 * MAX_SKB_FRAGS;
(txq->bd.ring_size - txq->tx_stop_threshold) / 2;
txq->tso_hdrs = dma_alloc_coherent(&fep->pdev->dev, txq->tso_hdrs = dma_alloc_coherent(&fep->pdev->dev,
txq->bd.ring_size * TSO_HEADER_SIZE, txq->bd.ring_size * TSO_HEADER_SIZE,
@ -3732,12 +3784,18 @@ static int fec_enet_bpf(struct net_device *dev, struct netdev_bpf *bpf)
if (fep->quirks & FEC_QUIRK_SWAP_FRAME) if (fep->quirks & FEC_QUIRK_SWAP_FRAME)
return -EOPNOTSUPP; return -EOPNOTSUPP;
if (!bpf->prog)
xdp_features_clear_redirect_target(dev);
if (is_run) { if (is_run) {
napi_disable(&fep->napi); napi_disable(&fep->napi);
netif_tx_disable(dev); netif_tx_disable(dev);
} }
old_prog = xchg(&fep->xdp_prog, bpf->prog); old_prog = xchg(&fep->xdp_prog, bpf->prog);
if (old_prog)
bpf_prog_put(old_prog);
fec_restart(dev); fec_restart(dev);
if (is_run) { if (is_run) {
@ -3745,8 +3803,8 @@ static int fec_enet_bpf(struct net_device *dev, struct netdev_bpf *bpf)
netif_tx_start_all_queues(dev); netif_tx_start_all_queues(dev);
} }
if (old_prog) if (bpf->prog)
bpf_prog_put(old_prog); xdp_features_set_redirect_target(dev, false);
return 0; return 0;
@ -3778,7 +3836,7 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
entries_free = fec_enet_get_free_txdesc_num(txq); entries_free = fec_enet_get_free_txdesc_num(txq);
if (entries_free < MAX_SKB_FRAGS + 1) { if (entries_free < MAX_SKB_FRAGS + 1) {
netdev_err(fep->netdev, "NOT enough BD for SG!\n"); netdev_err_once(fep->netdev, "NOT enough BD for SG!\n");
return -EBUSY; return -EBUSY;
} }
@ -3811,7 +3869,8 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
ebdp->cbd_esc = cpu_to_fec32(estatus); ebdp->cbd_esc = cpu_to_fec32(estatus);
} }
txq->tx_skbuff[index] = NULL; txq->tx_buf[index].type = FEC_TXBUF_T_XDP_NDO;
txq->tx_buf[index].xdp = frame;
/* Make sure the updates to rest of the descriptor are performed before /* Make sure the updates to rest of the descriptor are performed before
* transferring ownership. * transferring ownership.
@ -4016,8 +4075,7 @@ static int fec_enet_init(struct net_device *ndev)
if (!(fep->quirks & FEC_QUIRK_SWAP_FRAME)) if (!(fep->quirks & FEC_QUIRK_SWAP_FRAME))
ndev->xdp_features = NETDEV_XDP_ACT_BASIC | ndev->xdp_features = NETDEV_XDP_ACT_BASIC |
NETDEV_XDP_ACT_REDIRECT | NETDEV_XDP_ACT_REDIRECT;
NETDEV_XDP_ACT_NDO_XMIT;
fec_restart(ndev); fec_restart(ndev);

View file

@ -964,5 +964,6 @@ void gve_handle_report_stats(struct gve_priv *priv);
/* exported by ethtool.c */ /* exported by ethtool.c */
extern const struct ethtool_ops gve_ethtool_ops; extern const struct ethtool_ops gve_ethtool_ops;
/* needed by ethtool */ /* needed by ethtool */
extern char gve_driver_name[];
extern const char gve_version_str[]; extern const char gve_version_str[];
#endif /* _GVE_H_ */ #endif /* _GVE_H_ */

View file

@ -15,7 +15,7 @@ static void gve_get_drvinfo(struct net_device *netdev,
{ {
struct gve_priv *priv = netdev_priv(netdev); struct gve_priv *priv = netdev_priv(netdev);
strscpy(info->driver, "gve", sizeof(info->driver)); strscpy(info->driver, gve_driver_name, sizeof(info->driver));
strscpy(info->version, gve_version_str, sizeof(info->version)); strscpy(info->version, gve_version_str, sizeof(info->version));
strscpy(info->bus_info, pci_name(priv->pdev), sizeof(info->bus_info)); strscpy(info->bus_info, pci_name(priv->pdev), sizeof(info->bus_info));
} }
@ -590,6 +590,9 @@ static int gve_get_link_ksettings(struct net_device *netdev,
err = gve_adminq_report_link_speed(priv); err = gve_adminq_report_link_speed(priv);
cmd->base.speed = priv->link_speed; cmd->base.speed = priv->link_speed;
cmd->base.duplex = DUPLEX_FULL;
return err; return err;
} }

View file

@ -33,6 +33,7 @@
#define MIN_TX_TIMEOUT_GAP (1000 * 10) #define MIN_TX_TIMEOUT_GAP (1000 * 10)
#define DQO_TX_MAX 0x3FFFF #define DQO_TX_MAX 0x3FFFF
char gve_driver_name[] = "gve";
const char gve_version_str[] = GVE_VERSION; const char gve_version_str[] = GVE_VERSION;
static const char gve_version_prefix[] = GVE_VERSION_PREFIX; static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
@ -2200,7 +2201,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (err) if (err)
return err; return err;
err = pci_request_regions(pdev, "gvnic-cfg"); err = pci_request_regions(pdev, gve_driver_name);
if (err) if (err)
goto abort_with_enabled; goto abort_with_enabled;
@ -2393,8 +2394,8 @@ static const struct pci_device_id gve_id_table[] = {
{ } { }
}; };
static struct pci_driver gvnic_driver = { static struct pci_driver gve_driver = {
.name = "gvnic", .name = gve_driver_name,
.id_table = gve_id_table, .id_table = gve_id_table,
.probe = gve_probe, .probe = gve_probe,
.remove = gve_remove, .remove = gve_remove,
@ -2405,10 +2406,10 @@ static struct pci_driver gvnic_driver = {
#endif #endif
}; };
module_pci_driver(gvnic_driver); module_pci_driver(gve_driver);
MODULE_DEVICE_TABLE(pci, gve_id_table); MODULE_DEVICE_TABLE(pci, gve_id_table);
MODULE_AUTHOR("Google, Inc."); MODULE_AUTHOR("Google, Inc.");
MODULE_DESCRIPTION("gVNIC Driver"); MODULE_DESCRIPTION("Google Virtual NIC Driver");
MODULE_LICENSE("Dual MIT/GPL"); MODULE_LICENSE("Dual MIT/GPL");
MODULE_VERSION(GVE_VERSION); MODULE_VERSION(GVE_VERSION);

View file

@ -5739,6 +5739,13 @@ ice_set_tx_maxrate(struct net_device *netdev, int queue_index, u32 maxrate)
q_handle = vsi->tx_rings[queue_index]->q_handle; q_handle = vsi->tx_rings[queue_index]->q_handle;
tc = ice_dcb_get_tc(vsi, queue_index); tc = ice_dcb_get_tc(vsi, queue_index);
vsi = ice_locate_vsi_using_queue(vsi, queue_index);
if (!vsi) {
netdev_err(netdev, "Invalid VSI for given queue %d\n",
queue_index);
return -EINVAL;
}
/* Set BW back to default, when user set maxrate to 0 */ /* Set BW back to default, when user set maxrate to 0 */
if (!maxrate) if (!maxrate)
status = ice_cfg_q_bw_dflt_lmt(vsi->port_info, vsi->idx, tc, status = ice_cfg_q_bw_dflt_lmt(vsi->port_info, vsi->idx, tc,
@ -7872,10 +7879,10 @@ static int
ice_validate_mqprio_qopt(struct ice_vsi *vsi, ice_validate_mqprio_qopt(struct ice_vsi *vsi,
struct tc_mqprio_qopt_offload *mqprio_qopt) struct tc_mqprio_qopt_offload *mqprio_qopt)
{ {
u64 sum_max_rate = 0, sum_min_rate = 0;
int non_power_of_2_qcount = 0; int non_power_of_2_qcount = 0;
struct ice_pf *pf = vsi->back; struct ice_pf *pf = vsi->back;
int max_rss_q_cnt = 0; int max_rss_q_cnt = 0;
u64 sum_min_rate = 0;
struct device *dev; struct device *dev;
int i, speed; int i, speed;
u8 num_tc; u8 num_tc;
@ -7891,6 +7898,7 @@ ice_validate_mqprio_qopt(struct ice_vsi *vsi,
dev = ice_pf_to_dev(pf); dev = ice_pf_to_dev(pf);
vsi->ch_rss_size = 0; vsi->ch_rss_size = 0;
num_tc = mqprio_qopt->qopt.num_tc; num_tc = mqprio_qopt->qopt.num_tc;
speed = ice_get_link_speed_kbps(vsi);
for (i = 0; num_tc; i++) { for (i = 0; num_tc; i++) {
int qcount = mqprio_qopt->qopt.count[i]; int qcount = mqprio_qopt->qopt.count[i];
@ -7931,7 +7939,6 @@ ice_validate_mqprio_qopt(struct ice_vsi *vsi,
*/ */
max_rate = mqprio_qopt->max_rate[i]; max_rate = mqprio_qopt->max_rate[i];
max_rate = div_u64(max_rate, ICE_BW_KBPS_DIVISOR); max_rate = div_u64(max_rate, ICE_BW_KBPS_DIVISOR);
sum_max_rate += max_rate;
/* min_rate is minimum guaranteed rate and it can't be zero */ /* min_rate is minimum guaranteed rate and it can't be zero */
min_rate = mqprio_qopt->min_rate[i]; min_rate = mqprio_qopt->min_rate[i];
@ -7944,6 +7951,12 @@ ice_validate_mqprio_qopt(struct ice_vsi *vsi,
return -EINVAL; return -EINVAL;
} }
if (max_rate && max_rate > speed) {
dev_err(dev, "TC%d: max_rate(%llu Kbps) > link speed of %u Kbps\n",
i, max_rate, speed);
return -EINVAL;
}
iter_div_u64_rem(min_rate, ICE_MIN_BW_LIMIT, &rem); iter_div_u64_rem(min_rate, ICE_MIN_BW_LIMIT, &rem);
if (rem) { if (rem) {
dev_err(dev, "TC%d: Min Rate not multiple of %u Kbps", dev_err(dev, "TC%d: Min Rate not multiple of %u Kbps",
@ -7981,12 +7994,6 @@ ice_validate_mqprio_qopt(struct ice_vsi *vsi,
(mqprio_qopt->qopt.offset[i] + mqprio_qopt->qopt.count[i])) (mqprio_qopt->qopt.offset[i] + mqprio_qopt->qopt.count[i]))
return -EINVAL; return -EINVAL;
speed = ice_get_link_speed_kbps(vsi);
if (sum_max_rate && sum_max_rate > (u64)speed) {
dev_err(dev, "Invalid max Tx rate(%llu) Kbps > speed(%u) Kbps specified\n",
sum_max_rate, speed);
return -EINVAL;
}
if (sum_min_rate && sum_min_rate > (u64)speed) { if (sum_min_rate && sum_min_rate > (u64)speed) {
dev_err(dev, "Invalid min Tx rate(%llu) Kbps > speed (%u) Kbps specified\n", dev_err(dev, "Invalid min Tx rate(%llu) Kbps > speed (%u) Kbps specified\n",
sum_min_rate, speed); sum_min_rate, speed);

View file

@ -750,17 +750,16 @@ exit:
/** /**
* ice_locate_vsi_using_queue - locate VSI using queue (forward to queue action) * ice_locate_vsi_using_queue - locate VSI using queue (forward to queue action)
* @vsi: Pointer to VSI * @vsi: Pointer to VSI
* @tc_fltr: Pointer to tc_flower_filter * @queue: Queue index
* *
* Locate the VSI using specified queue. When ADQ is not enabled, always * Locate the VSI using specified "queue". When ADQ is not enabled,
* return input VSI, otherwise locate corresponding VSI based on per channel * always return input VSI, otherwise locate corresponding
* offset and qcount * VSI based on per channel "offset" and "qcount"
*/ */
static struct ice_vsi * struct ice_vsi *
ice_locate_vsi_using_queue(struct ice_vsi *vsi, ice_locate_vsi_using_queue(struct ice_vsi *vsi, int queue)
struct ice_tc_flower_fltr *tc_fltr)
{ {
int num_tc, tc, queue; int num_tc, tc;
/* if ADQ is not active, passed VSI is the candidate VSI */ /* if ADQ is not active, passed VSI is the candidate VSI */
if (!ice_is_adq_active(vsi->back)) if (!ice_is_adq_active(vsi->back))
@ -770,7 +769,6 @@ ice_locate_vsi_using_queue(struct ice_vsi *vsi,
* upon queue number) * upon queue number)
*/ */
num_tc = vsi->mqprio_qopt.qopt.num_tc; num_tc = vsi->mqprio_qopt.qopt.num_tc;
queue = tc_fltr->action.fwd.q.queue;
for (tc = 0; tc < num_tc; tc++) { for (tc = 0; tc < num_tc; tc++) {
int qcount = vsi->mqprio_qopt.qopt.count[tc]; int qcount = vsi->mqprio_qopt.qopt.count[tc];
@ -812,6 +810,7 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr)
struct ice_pf *pf = vsi->back; struct ice_pf *pf = vsi->back;
struct device *dev; struct device *dev;
u32 tc_class; u32 tc_class;
int q;
dev = ice_pf_to_dev(pf); dev = ice_pf_to_dev(pf);
@ -840,7 +839,8 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr)
/* Determine destination VSI even though the action is /* Determine destination VSI even though the action is
* FWD_TO_QUEUE, because QUEUE is associated with VSI * FWD_TO_QUEUE, because QUEUE is associated with VSI
*/ */
dest_vsi = tc_fltr->dest_vsi; q = tc_fltr->action.fwd.q.queue;
dest_vsi = ice_locate_vsi_using_queue(vsi, q);
break; break;
default: default:
dev_err(dev, dev_err(dev,
@ -1716,7 +1716,7 @@ ice_tc_forward_to_queue(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr,
/* If ADQ is configured, and the queue belongs to ADQ VSI, then prepare /* If ADQ is configured, and the queue belongs to ADQ VSI, then prepare
* ADQ switch filter * ADQ switch filter
*/ */
ch_vsi = ice_locate_vsi_using_queue(vsi, fltr); ch_vsi = ice_locate_vsi_using_queue(vsi, fltr->action.fwd.q.queue);
if (!ch_vsi) if (!ch_vsi)
return -EINVAL; return -EINVAL;
fltr->dest_vsi = ch_vsi; fltr->dest_vsi = ch_vsi;

View file

@ -204,6 +204,7 @@ static inline int ice_chnl_dmac_fltr_cnt(struct ice_pf *pf)
return pf->num_dmac_chnl_fltrs; return pf->num_dmac_chnl_fltrs;
} }
struct ice_vsi *ice_locate_vsi_using_queue(struct ice_vsi *vsi, int queue);
int int
ice_add_cls_flower(struct net_device *netdev, struct ice_vsi *vsi, ice_add_cls_flower(struct net_device *netdev, struct ice_vsi *vsi,
struct flow_cls_offload *cls_flower); struct flow_cls_offload *cls_flower);

View file

@ -14,6 +14,7 @@
#include <linux/timecounter.h> #include <linux/timecounter.h>
#include <linux/net_tstamp.h> #include <linux/net_tstamp.h>
#include <linux/bitfield.h> #include <linux/bitfield.h>
#include <linux/hrtimer.h>
#include "igc_hw.h" #include "igc_hw.h"
@ -101,6 +102,8 @@ struct igc_ring {
u32 start_time; u32 start_time;
u32 end_time; u32 end_time;
u32 max_sdu; u32 max_sdu;
bool oper_gate_closed; /* Operating gate. True if the TX Queue is closed */
bool admin_gate_closed; /* Future gate. True if the TX Queue will be closed */
/* CBS parameters */ /* CBS parameters */
bool cbs_enable; /* indicates if CBS is enabled */ bool cbs_enable; /* indicates if CBS is enabled */
@ -160,6 +163,7 @@ struct igc_adapter {
struct timer_list watchdog_timer; struct timer_list watchdog_timer;
struct timer_list dma_err_timer; struct timer_list dma_err_timer;
struct timer_list phy_info_timer; struct timer_list phy_info_timer;
struct hrtimer hrtimer;
u32 wol; u32 wol;
u32 en_mng_pt; u32 en_mng_pt;
@ -184,10 +188,13 @@ struct igc_adapter {
u32 max_frame_size; u32 max_frame_size;
u32 min_frame_size; u32 min_frame_size;
int tc_setup_type;
ktime_t base_time; ktime_t base_time;
ktime_t cycle_time; ktime_t cycle_time;
bool qbv_enable; bool taprio_offload_enable;
u32 qbv_config_change_errors; u32 qbv_config_change_errors;
bool qbv_transition;
unsigned int qbv_count;
/* OS defined structs */ /* OS defined structs */
struct pci_dev *pdev; struct pci_dev *pdev;

View file

@ -1708,6 +1708,8 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev,
/* twisted pair */ /* twisted pair */
cmd->base.port = PORT_TP; cmd->base.port = PORT_TP;
cmd->base.phy_address = hw->phy.addr; cmd->base.phy_address = hw->phy.addr;
ethtool_link_ksettings_add_link_mode(cmd, supported, TP);
ethtool_link_ksettings_add_link_mode(cmd, advertising, TP);
/* advertising link modes */ /* advertising link modes */
if (hw->phy.autoneg_advertised & ADVERTISE_10_HALF) if (hw->phy.autoneg_advertised & ADVERTISE_10_HALF)

View file

@ -711,7 +711,6 @@ static void igc_configure_tx_ring(struct igc_adapter *adapter,
/* disable the queue */ /* disable the queue */
wr32(IGC_TXDCTL(reg_idx), 0); wr32(IGC_TXDCTL(reg_idx), 0);
wrfl(); wrfl();
mdelay(10);
wr32(IGC_TDLEN(reg_idx), wr32(IGC_TDLEN(reg_idx),
ring->count * sizeof(union igc_adv_tx_desc)); ring->count * sizeof(union igc_adv_tx_desc));
@ -1017,7 +1016,7 @@ static __le32 igc_tx_launchtime(struct igc_ring *ring, ktime_t txtime,
ktime_t base_time = adapter->base_time; ktime_t base_time = adapter->base_time;
ktime_t now = ktime_get_clocktai(); ktime_t now = ktime_get_clocktai();
ktime_t baset_est, end_of_cycle; ktime_t baset_est, end_of_cycle;
u32 launchtime; s32 launchtime;
s64 n; s64 n;
n = div64_s64(ktime_sub_ns(now, base_time), cycle_time); n = div64_s64(ktime_sub_ns(now, base_time), cycle_time);
@ -1030,7 +1029,7 @@ static __le32 igc_tx_launchtime(struct igc_ring *ring, ktime_t txtime,
*first_flag = true; *first_flag = true;
ring->last_ff_cycle = baset_est; ring->last_ff_cycle = baset_est;
if (ktime_compare(txtime, ring->last_tx_cycle) > 0) if (ktime_compare(end_of_cycle, ring->last_tx_cycle) > 0)
*insert_empty = true; *insert_empty = true;
} }
} }
@ -1573,16 +1572,12 @@ done:
first->bytecount = skb->len; first->bytecount = skb->len;
first->gso_segs = 1; first->gso_segs = 1;
if (tx_ring->max_sdu > 0) { if (adapter->qbv_transition || tx_ring->oper_gate_closed)
u32 max_sdu = 0; goto out_drop;
max_sdu = tx_ring->max_sdu + if (tx_ring->max_sdu > 0 && first->bytecount > tx_ring->max_sdu) {
(skb_vlan_tagged(first->skb) ? VLAN_HLEN : 0); adapter->stats.txdrop++;
goto out_drop;
if (first->bytecount > max_sdu) {
adapter->stats.txdrop++;
goto out_drop;
}
} }
if (unlikely(test_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags) && if (unlikely(test_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags) &&
@ -3012,8 +3007,8 @@ static bool igc_clean_tx_irq(struct igc_q_vector *q_vector, int napi_budget)
time_after(jiffies, tx_buffer->time_stamp + time_after(jiffies, tx_buffer->time_stamp +
(adapter->tx_timeout_factor * HZ)) && (adapter->tx_timeout_factor * HZ)) &&
!(rd32(IGC_STATUS) & IGC_STATUS_TXOFF) && !(rd32(IGC_STATUS) & IGC_STATUS_TXOFF) &&
(rd32(IGC_TDH(tx_ring->reg_idx)) != (rd32(IGC_TDH(tx_ring->reg_idx)) != readl(tx_ring->tail)) &&
readl(tx_ring->tail))) { !tx_ring->oper_gate_closed) {
/* detected Tx unit hang */ /* detected Tx unit hang */
netdev_err(tx_ring->netdev, netdev_err(tx_ring->netdev,
"Detected Tx Unit Hang\n" "Detected Tx Unit Hang\n"
@ -6102,7 +6097,10 @@ static int igc_tsn_clear_schedule(struct igc_adapter *adapter)
adapter->base_time = 0; adapter->base_time = 0;
adapter->cycle_time = NSEC_PER_SEC; adapter->cycle_time = NSEC_PER_SEC;
adapter->taprio_offload_enable = false;
adapter->qbv_config_change_errors = 0; adapter->qbv_config_change_errors = 0;
adapter->qbv_transition = false;
adapter->qbv_count = 0;
for (i = 0; i < adapter->num_tx_queues; i++) { for (i = 0; i < adapter->num_tx_queues; i++) {
struct igc_ring *ring = adapter->tx_ring[i]; struct igc_ring *ring = adapter->tx_ring[i];
@ -6110,6 +6108,8 @@ static int igc_tsn_clear_schedule(struct igc_adapter *adapter)
ring->start_time = 0; ring->start_time = 0;
ring->end_time = NSEC_PER_SEC; ring->end_time = NSEC_PER_SEC;
ring->max_sdu = 0; ring->max_sdu = 0;
ring->oper_gate_closed = false;
ring->admin_gate_closed = false;
} }
return 0; return 0;
@ -6121,27 +6121,20 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
bool queue_configured[IGC_MAX_TX_QUEUES] = { }; bool queue_configured[IGC_MAX_TX_QUEUES] = { };
struct igc_hw *hw = &adapter->hw; struct igc_hw *hw = &adapter->hw;
u32 start_time = 0, end_time = 0; u32 start_time = 0, end_time = 0;
struct timespec64 now;
size_t n; size_t n;
int i; int i;
switch (qopt->cmd) { if (qopt->cmd == TAPRIO_CMD_DESTROY)
case TAPRIO_CMD_REPLACE:
adapter->qbv_enable = true;
break;
case TAPRIO_CMD_DESTROY:
adapter->qbv_enable = false;
break;
default:
return -EOPNOTSUPP;
}
if (!adapter->qbv_enable)
return igc_tsn_clear_schedule(adapter); return igc_tsn_clear_schedule(adapter);
if (qopt->cmd != TAPRIO_CMD_REPLACE)
return -EOPNOTSUPP;
if (qopt->base_time < 0) if (qopt->base_time < 0)
return -ERANGE; return -ERANGE;
if (igc_is_device_id_i225(hw) && adapter->base_time) if (igc_is_device_id_i225(hw) && adapter->taprio_offload_enable)
return -EALREADY; return -EALREADY;
if (!validate_schedule(adapter, qopt)) if (!validate_schedule(adapter, qopt))
@ -6149,6 +6142,9 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
adapter->cycle_time = qopt->cycle_time; adapter->cycle_time = qopt->cycle_time;
adapter->base_time = qopt->base_time; adapter->base_time = qopt->base_time;
adapter->taprio_offload_enable = true;
igc_ptp_read(adapter, &now);
for (n = 0; n < qopt->num_entries; n++) { for (n = 0; n < qopt->num_entries; n++) {
struct tc_taprio_sched_entry *e = &qopt->entries[n]; struct tc_taprio_sched_entry *e = &qopt->entries[n];
@ -6184,7 +6180,10 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
ring->start_time = start_time; ring->start_time = start_time;
ring->end_time = end_time; ring->end_time = end_time;
queue_configured[i] = true; if (ring->start_time >= adapter->cycle_time)
queue_configured[i] = false;
else
queue_configured[i] = true;
} }
start_time += e->interval; start_time += e->interval;
@ -6194,8 +6193,20 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
* If not, set the start and end time to be end time. * If not, set the start and end time to be end time.
*/ */
for (i = 0; i < adapter->num_tx_queues; i++) { for (i = 0; i < adapter->num_tx_queues; i++) {
struct igc_ring *ring = adapter->tx_ring[i];
if (!is_base_time_past(qopt->base_time, &now)) {
ring->admin_gate_closed = false;
} else {
ring->oper_gate_closed = false;
ring->admin_gate_closed = false;
}
if (!queue_configured[i]) { if (!queue_configured[i]) {
struct igc_ring *ring = adapter->tx_ring[i]; if (!is_base_time_past(qopt->base_time, &now))
ring->admin_gate_closed = true;
else
ring->oper_gate_closed = true;
ring->start_time = end_time; ring->start_time = end_time;
ring->end_time = end_time; ring->end_time = end_time;
@ -6207,7 +6218,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
struct net_device *dev = adapter->netdev; struct net_device *dev = adapter->netdev;
if (qopt->max_sdu[i]) if (qopt->max_sdu[i])
ring->max_sdu = qopt->max_sdu[i] + dev->hard_header_len; ring->max_sdu = qopt->max_sdu[i] + dev->hard_header_len - ETH_TLEN;
else else
ring->max_sdu = 0; ring->max_sdu = 0;
} }
@ -6327,6 +6338,8 @@ static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type,
{ {
struct igc_adapter *adapter = netdev_priv(dev); struct igc_adapter *adapter = netdev_priv(dev);
adapter->tc_setup_type = type;
switch (type) { switch (type) {
case TC_QUERY_CAPS: case TC_QUERY_CAPS:
return igc_tc_query_caps(adapter, type_data); return igc_tc_query_caps(adapter, type_data);
@ -6574,6 +6587,27 @@ static const struct xdp_metadata_ops igc_xdp_metadata_ops = {
.xmo_rx_timestamp = igc_xdp_rx_timestamp, .xmo_rx_timestamp = igc_xdp_rx_timestamp,
}; };
static enum hrtimer_restart igc_qbv_scheduling_timer(struct hrtimer *timer)
{
struct igc_adapter *adapter = container_of(timer, struct igc_adapter,
hrtimer);
unsigned int i;
adapter->qbv_transition = true;
for (i = 0; i < adapter->num_tx_queues; i++) {
struct igc_ring *tx_ring = adapter->tx_ring[i];
if (tx_ring->admin_gate_closed) {
tx_ring->admin_gate_closed = false;
tx_ring->oper_gate_closed = true;
} else {
tx_ring->oper_gate_closed = false;
}
}
adapter->qbv_transition = false;
return HRTIMER_NORESTART;
}
/** /**
* igc_probe - Device Initialization Routine * igc_probe - Device Initialization Routine
* @pdev: PCI device information struct * @pdev: PCI device information struct
@ -6752,6 +6786,9 @@ static int igc_probe(struct pci_dev *pdev,
INIT_WORK(&adapter->reset_task, igc_reset_task); INIT_WORK(&adapter->reset_task, igc_reset_task);
INIT_WORK(&adapter->watchdog_task, igc_watchdog_task); INIT_WORK(&adapter->watchdog_task, igc_watchdog_task);
hrtimer_init(&adapter->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
adapter->hrtimer.function = &igc_qbv_scheduling_timer;
/* Initialize link properties that are user-changeable */ /* Initialize link properties that are user-changeable */
adapter->fc_autoneg = true; adapter->fc_autoneg = true;
hw->mac.autoneg = true; hw->mac.autoneg = true;
@ -6855,6 +6892,7 @@ static void igc_remove(struct pci_dev *pdev)
cancel_work_sync(&adapter->reset_task); cancel_work_sync(&adapter->reset_task);
cancel_work_sync(&adapter->watchdog_task); cancel_work_sync(&adapter->watchdog_task);
hrtimer_cancel(&adapter->hrtimer);
/* Release control of h/w to f/w. If f/w is AMT enabled, this /* Release control of h/w to f/w. If f/w is AMT enabled, this
* would have already happened in close and is redundant. * would have already happened in close and is redundant.

View file

@ -356,16 +356,35 @@ static int igc_ptp_feature_enable_i225(struct ptp_clock_info *ptp,
tsim &= ~IGC_TSICR_TT0; tsim &= ~IGC_TSICR_TT0;
} }
if (on) { if (on) {
struct timespec64 safe_start;
int i = rq->perout.index; int i = rq->perout.index;
igc_pin_perout(igc, i, pin, use_freq); igc_pin_perout(igc, i, pin, use_freq);
igc->perout[i].start.tv_sec = rq->perout.start.sec; igc_ptp_read(igc, &safe_start);
/* PPS output start time is triggered by Target time(TT)
* register. Programming any past time value into TT
* register will cause PPS to never start. Need to make
* sure we program the TT register a time ahead in
* future. There isn't a stringent need to fire PPS out
* right away. Adding +2 seconds should take care of
* corner cases. Let's say if the SYSTIML is close to
* wrap up and the timer keeps ticking as we program the
* register, adding +2seconds is safe bet.
*/
safe_start.tv_sec += 2;
if (rq->perout.start.sec < safe_start.tv_sec)
igc->perout[i].start.tv_sec = safe_start.tv_sec;
else
igc->perout[i].start.tv_sec = rq->perout.start.sec;
igc->perout[i].start.tv_nsec = rq->perout.start.nsec; igc->perout[i].start.tv_nsec = rq->perout.start.nsec;
igc->perout[i].period.tv_sec = ts.tv_sec; igc->perout[i].period.tv_sec = ts.tv_sec;
igc->perout[i].period.tv_nsec = ts.tv_nsec; igc->perout[i].period.tv_nsec = ts.tv_nsec;
wr32(trgttimh, rq->perout.start.sec); wr32(trgttimh, (u32)igc->perout[i].start.tv_sec);
/* For now, always select timer 0 as source. */ /* For now, always select timer 0 as source. */
wr32(trgttiml, rq->perout.start.nsec | IGC_TT_IO_TIMER_SEL_SYSTIM0); wr32(trgttiml, (u32)(igc->perout[i].start.tv_nsec |
IGC_TT_IO_TIMER_SEL_SYSTIM0));
if (use_freq) if (use_freq)
wr32(freqout, ns); wr32(freqout, ns);
tsauxc |= tsauxc_mask; tsauxc |= tsauxc_mask;

View file

@ -37,7 +37,7 @@ static unsigned int igc_tsn_new_flags(struct igc_adapter *adapter)
{ {
unsigned int new_flags = adapter->flags & ~IGC_FLAG_TSN_ANY_ENABLED; unsigned int new_flags = adapter->flags & ~IGC_FLAG_TSN_ANY_ENABLED;
if (adapter->qbv_enable) if (adapter->taprio_offload_enable)
new_flags |= IGC_FLAG_TSN_QBV_ENABLED; new_flags |= IGC_FLAG_TSN_QBV_ENABLED;
if (is_any_launchtime(adapter)) if (is_any_launchtime(adapter))
@ -114,7 +114,6 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter)
static int igc_tsn_enable_offload(struct igc_adapter *adapter) static int igc_tsn_enable_offload(struct igc_adapter *adapter)
{ {
struct igc_hw *hw = &adapter->hw; struct igc_hw *hw = &adapter->hw;
bool tsn_mode_reconfig = false;
u32 tqavctrl, baset_l, baset_h; u32 tqavctrl, baset_l, baset_h;
u32 sec, nsec, cycle; u32 sec, nsec, cycle;
ktime_t base_time, systim; ktime_t base_time, systim;
@ -133,8 +132,28 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
wr32(IGC_STQT(i), ring->start_time); wr32(IGC_STQT(i), ring->start_time);
wr32(IGC_ENDQT(i), ring->end_time); wr32(IGC_ENDQT(i), ring->end_time);
txqctl |= IGC_TXQCTL_STRICT_CYCLE | if (adapter->taprio_offload_enable) {
IGC_TXQCTL_STRICT_END; /* If taprio_offload_enable is set we are in "taprio"
* mode and we need to be strict about the
* cycles: only transmit a packet if it can be
* completed during that cycle.
*
* If taprio_offload_enable is NOT true when
* enabling TSN offload, the cycle should have
* no external effects, but is only used internally
* to adapt the base time register after a second
* has passed.
*
* Enabling strict mode in this case would
* unnecessarily prevent the transmission of
* certain packets (i.e. at the boundary of a
* second) and thus interfere with the launchtime
* feature that promises transmission at a
* certain point in time.
*/
txqctl |= IGC_TXQCTL_STRICT_CYCLE |
IGC_TXQCTL_STRICT_END;
}
if (ring->launchtime_enable) if (ring->launchtime_enable)
txqctl |= IGC_TXQCTL_QUEUE_MODE_LAUNCHT; txqctl |= IGC_TXQCTL_QUEUE_MODE_LAUNCHT;
@ -228,11 +247,10 @@ skip_cbs:
tqavctrl = rd32(IGC_TQAVCTRL) & ~IGC_TQAVCTRL_FUTSCDDIS; tqavctrl = rd32(IGC_TQAVCTRL) & ~IGC_TQAVCTRL_FUTSCDDIS;
if (tqavctrl & IGC_TQAVCTRL_TRANSMIT_MODE_TSN)
tsn_mode_reconfig = true;
tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV; tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV;
adapter->qbv_count++;
cycle = adapter->cycle_time; cycle = adapter->cycle_time;
base_time = adapter->base_time; base_time = adapter->base_time;
@ -249,17 +267,29 @@ skip_cbs:
* Gate Control List (GCL) is running. * Gate Control List (GCL) is running.
*/ */
if ((rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) && if ((rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) &&
tsn_mode_reconfig) (adapter->tc_setup_type == TC_SETUP_QDISC_TAPRIO) &&
(adapter->qbv_count > 1))
adapter->qbv_config_change_errors++; adapter->qbv_config_change_errors++;
} else { } else {
/* According to datasheet section 7.5.2.9.3.3, FutScdDis bit if (igc_is_device_id_i226(hw)) {
* has to be configured before the cycle time and base time. ktime_t adjust_time, expires_time;
* Tx won't hang if there is a GCL is already running,
* so in this case we don't need to set FutScdDis. /* According to datasheet section 7.5.2.9.3.3, FutScdDis bit
*/ * has to be configured before the cycle time and base time.
if (igc_is_device_id_i226(hw) && * Tx won't hang if a GCL is already running,
!(rd32(IGC_BASET_H) || rd32(IGC_BASET_L))) * so in this case we don't need to set FutScdDis.
tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS; */
if (!(rd32(IGC_BASET_H) || rd32(IGC_BASET_L)))
tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS;
nsec = rd32(IGC_SYSTIML);
sec = rd32(IGC_SYSTIMH);
systim = ktime_set(sec, nsec);
adjust_time = adapter->base_time;
expires_time = ktime_sub_ns(adjust_time, systim);
hrtimer_start(&adapter->hrtimer, expires_time, HRTIMER_MODE_REL);
}
} }
wr32(IGC_TQAVCTRL, tqavctrl); wr32(IGC_TQAVCTRL, tqavctrl);
@ -305,7 +335,11 @@ int igc_tsn_offload_apply(struct igc_adapter *adapter)
{ {
struct igc_hw *hw = &adapter->hw; struct igc_hw *hw = &adapter->hw;
if (netif_running(adapter->netdev) && igc_is_device_id_i225(hw)) { /* Per I225/6 HW Design Section 7.5.2.1, transmit mode
* cannot be changed dynamically. Require reset the adapter.
*/
if (netif_running(adapter->netdev) &&
(igc_is_device_id_i225(hw) || !adapter->qbv_count)) {
schedule_work(&adapter->reset_task); schedule_work(&adapter->reset_task);
return 0; return 0;
} }

View file

@ -1511,7 +1511,7 @@ static void mvneta_defaults_set(struct mvneta_port *pp)
*/ */
if (txq_number == 1) if (txq_number == 1)
txq_map = (cpu == pp->rxq_def) ? txq_map = (cpu == pp->rxq_def) ?
MVNETA_CPU_TXQ_ACCESS(1) : 0; MVNETA_CPU_TXQ_ACCESS(0) : 0;
} else { } else {
txq_map = MVNETA_CPU_TXQ_ACCESS_ALL_MASK; txq_map = MVNETA_CPU_TXQ_ACCESS_ALL_MASK;
@ -4356,7 +4356,7 @@ static void mvneta_percpu_elect(struct mvneta_port *pp)
*/ */
if (txq_number == 1) if (txq_number == 1)
txq_map = (cpu == elected_cpu) ? txq_map = (cpu == elected_cpu) ?
MVNETA_CPU_TXQ_ACCESS(1) : 0; MVNETA_CPU_TXQ_ACCESS(0) : 0;
else else
txq_map = mvreg_read(pp, MVNETA_CPU_MAP(cpu)) & txq_map = mvreg_read(pp, MVNETA_CPU_MAP(cpu)) &
MVNETA_CPU_TXQ_ACCESS_ALL_MASK; MVNETA_CPU_TXQ_ACCESS_ALL_MASK;

View file

@ -208,7 +208,7 @@ struct ptp *ptp_get(void)
/* Check driver is bound to PTP block */ /* Check driver is bound to PTP block */
if (!ptp) if (!ptp)
ptp = ERR_PTR(-EPROBE_DEFER); ptp = ERR_PTR(-EPROBE_DEFER);
else else if (!IS_ERR(ptp))
pci_dev_get(ptp->pdev); pci_dev_get(ptp->pdev);
return ptp; return ptp;
@ -388,11 +388,10 @@ static int ptp_extts_on(struct ptp *ptp, int on)
static int ptp_probe(struct pci_dev *pdev, static int ptp_probe(struct pci_dev *pdev,
const struct pci_device_id *ent) const struct pci_device_id *ent)
{ {
struct device *dev = &pdev->dev;
struct ptp *ptp; struct ptp *ptp;
int err; int err;
ptp = devm_kzalloc(dev, sizeof(*ptp), GFP_KERNEL); ptp = kzalloc(sizeof(*ptp), GFP_KERNEL);
if (!ptp) { if (!ptp) {
err = -ENOMEM; err = -ENOMEM;
goto error; goto error;
@ -428,20 +427,19 @@ static int ptp_probe(struct pci_dev *pdev,
return 0; return 0;
error_free: error_free:
devm_kfree(dev, ptp); kfree(ptp);
error: error:
/* For `ptp_get()` we need to differentiate between the case /* For `ptp_get()` we need to differentiate between the case
* when the core has not tried to probe this device and the case when * when the core has not tried to probe this device and the case when
* the probe failed. In the later case we pretend that the * the probe failed. In the later case we keep the error in
* initialization was successful and keep the error in
* `dev->driver_data`. * `dev->driver_data`.
*/ */
pci_set_drvdata(pdev, ERR_PTR(err)); pci_set_drvdata(pdev, ERR_PTR(err));
if (!first_ptp_block) if (!first_ptp_block)
first_ptp_block = ERR_PTR(err); first_ptp_block = ERR_PTR(err);
return 0; return err;
} }
static void ptp_remove(struct pci_dev *pdev) static void ptp_remove(struct pci_dev *pdev)
@ -449,16 +447,17 @@ static void ptp_remove(struct pci_dev *pdev)
struct ptp *ptp = pci_get_drvdata(pdev); struct ptp *ptp = pci_get_drvdata(pdev);
u64 clock_cfg; u64 clock_cfg;
if (cn10k_ptp_errata(ptp) && hrtimer_active(&ptp->hrtimer))
hrtimer_cancel(&ptp->hrtimer);
if (IS_ERR_OR_NULL(ptp)) if (IS_ERR_OR_NULL(ptp))
return; return;
if (cn10k_ptp_errata(ptp) && hrtimer_active(&ptp->hrtimer))
hrtimer_cancel(&ptp->hrtimer);
/* Disable PTP clock */ /* Disable PTP clock */
clock_cfg = readq(ptp->reg_base + PTP_CLOCK_CFG); clock_cfg = readq(ptp->reg_base + PTP_CLOCK_CFG);
clock_cfg &= ~PTP_CLOCK_CFG_PTP_EN; clock_cfg &= ~PTP_CLOCK_CFG_PTP_EN;
writeq(clock_cfg, ptp->reg_base + PTP_CLOCK_CFG); writeq(clock_cfg, ptp->reg_base + PTP_CLOCK_CFG);
kfree(ptp);
} }
static const struct pci_device_id ptp_id_table[] = { static const struct pci_device_id ptp_id_table[] = {

View file

@ -3252,7 +3252,7 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
rvu->ptp = ptp_get(); rvu->ptp = ptp_get();
if (IS_ERR(rvu->ptp)) { if (IS_ERR(rvu->ptp)) {
err = PTR_ERR(rvu->ptp); err = PTR_ERR(rvu->ptp);
if (err == -EPROBE_DEFER) if (err)
goto err_release_regions; goto err_release_regions;
rvu->ptp = NULL; rvu->ptp = NULL;
} }

View file

@ -4069,21 +4069,14 @@ int rvu_mbox_handler_nix_set_rx_mode(struct rvu *rvu, struct nix_rx_mode *req,
} }
/* install/uninstall promisc entry */ /* install/uninstall promisc entry */
if (promisc) { if (promisc)
rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf, rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf,
pfvf->rx_chan_base, pfvf->rx_chan_base,
pfvf->rx_chan_cnt); pfvf->rx_chan_cnt);
else
if (rvu_npc_exact_has_match_table(rvu))
rvu_npc_exact_promisc_enable(rvu, pcifunc);
} else {
if (!nix_rx_multicast) if (!nix_rx_multicast)
rvu_npc_enable_promisc_entry(rvu, pcifunc, nixlf, false); rvu_npc_enable_promisc_entry(rvu, pcifunc, nixlf, false);
if (rvu_npc_exact_has_match_table(rvu))
rvu_npc_exact_promisc_disable(rvu, pcifunc);
}
return 0; return 0;
} }

View file

@ -1164,8 +1164,10 @@ static u16 __rvu_npc_exact_cmd_rules_cnt_update(struct rvu *rvu, int drop_mcam_i
{ {
struct npc_exact_table *table; struct npc_exact_table *table;
u16 *cnt, old_cnt; u16 *cnt, old_cnt;
bool promisc;
table = rvu->hw->table; table = rvu->hw->table;
promisc = table->promisc_mode[drop_mcam_idx];
cnt = &table->cnt_cmd_rules[drop_mcam_idx]; cnt = &table->cnt_cmd_rules[drop_mcam_idx];
old_cnt = *cnt; old_cnt = *cnt;
@ -1177,13 +1179,18 @@ static u16 __rvu_npc_exact_cmd_rules_cnt_update(struct rvu *rvu, int drop_mcam_i
*enable_or_disable_cam = false; *enable_or_disable_cam = false;
/* If all rules are deleted, disable cam */ if (promisc)
goto done;
/* If all rules are deleted and not already in promisc mode;
* disable cam
*/
if (!*cnt && val < 0) { if (!*cnt && val < 0) {
*enable_or_disable_cam = true; *enable_or_disable_cam = true;
goto done; goto done;
} }
/* If rule got added, enable cam */ /* If rule got added and not already in promisc mode; enable cam */
if (!old_cnt && val > 0) { if (!old_cnt && val > 0) {
*enable_or_disable_cam = true; *enable_or_disable_cam = true;
goto done; goto done;
@ -1462,6 +1469,12 @@ int rvu_npc_exact_promisc_disable(struct rvu *rvu, u16 pcifunc)
*promisc = false; *promisc = false;
mutex_unlock(&table->lock); mutex_unlock(&table->lock);
/* Enable drop rule */
rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX,
true);
dev_dbg(rvu->dev, "%s: disabled promisc mode (cgx=%d lmac=%d)\n",
__func__, cgx_id, lmac_id);
return 0; return 0;
} }
@ -1503,6 +1516,12 @@ int rvu_npc_exact_promisc_enable(struct rvu *rvu, u16 pcifunc)
*promisc = true; *promisc = true;
mutex_unlock(&table->lock); mutex_unlock(&table->lock);
/* disable drop rule */
rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX,
false);
dev_dbg(rvu->dev, "%s: Enabled promisc mode (cgx=%d lmac=%d)\n",
__func__, cgx_id, lmac_id);
return 0; return 0;
} }

View file

@ -872,6 +872,14 @@ static int otx2_prepare_flow_request(struct ethtool_rx_flow_spec *fsp,
return -EINVAL; return -EINVAL;
vlan_etype = be16_to_cpu(fsp->h_ext.vlan_etype); vlan_etype = be16_to_cpu(fsp->h_ext.vlan_etype);
/* Drop rule with vlan_etype == 802.1Q
* and vlan_id == 0 is not supported
*/
if (vlan_etype == ETH_P_8021Q && !fsp->m_ext.vlan_tci &&
fsp->ring_cookie == RX_CLS_FLOW_DISC)
return -EINVAL;
/* Only ETH_P_8021Q and ETH_P_802AD types supported */ /* Only ETH_P_8021Q and ETH_P_802AD types supported */
if (vlan_etype != ETH_P_8021Q && if (vlan_etype != ETH_P_8021Q &&
vlan_etype != ETH_P_8021AD) vlan_etype != ETH_P_8021AD)

View file

@ -597,6 +597,21 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node,
return -EOPNOTSUPP; return -EOPNOTSUPP;
} }
if (!match.mask->vlan_id) {
struct flow_action_entry *act;
int i;
flow_action_for_each(i, act, &rule->action) {
if (act->id == FLOW_ACTION_DROP) {
netdev_err(nic->netdev,
"vlan tpid 0x%x with vlan_id %d is not supported for DROP rule.\n",
ntohs(match.key->vlan_tpid),
match.key->vlan_id);
return -EOPNOTSUPP;
}
}
}
if (match.mask->vlan_id || if (match.mask->vlan_id ||
match.mask->vlan_dei || match.mask->vlan_dei ||
match.mask->vlan_priority) { match.mask->vlan_priority) {

View file

@ -594,7 +594,7 @@ int mlx5e_fs_tt_redirect_any_create(struct mlx5e_flow_steering *fs)
err = fs_any_create_table(fs); err = fs_any_create_table(fs);
if (err) if (err)
return err; goto err_free_any;
err = fs_any_enable(fs); err = fs_any_enable(fs);
if (err) if (err)
@ -606,8 +606,8 @@ int mlx5e_fs_tt_redirect_any_create(struct mlx5e_flow_steering *fs)
err_destroy_table: err_destroy_table:
fs_any_destroy_table(fs_any); fs_any_destroy_table(fs_any);
err_free_any:
kfree(fs_any);
mlx5e_fs_set_any(fs, NULL); mlx5e_fs_set_any(fs, NULL);
kfree(fs_any);
return err; return err;
} }

View file

@ -729,8 +729,10 @@ int mlx5e_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params,
c = kvzalloc_node(sizeof(*c), GFP_KERNEL, dev_to_node(mlx5_core_dma_dev(mdev))); c = kvzalloc_node(sizeof(*c), GFP_KERNEL, dev_to_node(mlx5_core_dma_dev(mdev)));
cparams = kvzalloc(sizeof(*cparams), GFP_KERNEL); cparams = kvzalloc(sizeof(*cparams), GFP_KERNEL);
if (!c || !cparams) if (!c || !cparams) {
return -ENOMEM; err = -ENOMEM;
goto err_free;
}
c->priv = priv; c->priv = priv;
c->mdev = priv->mdev; c->mdev = priv->mdev;

View file

@ -1545,7 +1545,8 @@ mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv,
attr->ct_attr.ct_action |= act->ct.action; /* So we can have clear + ct */ attr->ct_attr.ct_action |= act->ct.action; /* So we can have clear + ct */
attr->ct_attr.zone = act->ct.zone; attr->ct_attr.zone = act->ct.zone;
attr->ct_attr.nf_ft = act->ct.flow_table; if (!(act->ct.action & TCA_CT_ACT_CLEAR))
attr->ct_attr.nf_ft = act->ct.flow_table;
attr->ct_attr.act_miss_cookie = act->miss_cookie; attr->ct_attr.act_miss_cookie = act->miss_cookie;
return 0; return 0;
@ -1990,6 +1991,9 @@ mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *priv, struct mlx5_flow_attr *att
if (!priv) if (!priv)
return -EOPNOTSUPP; return -EOPNOTSUPP;
if (attr->ct_attr.offloaded)
return 0;
if (attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR) { if (attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR) {
err = mlx5_tc_ct_entry_set_registers(priv, &attr->parse_attr->mod_hdr_acts, err = mlx5_tc_ct_entry_set_registers(priv, &attr->parse_attr->mod_hdr_acts,
0, 0, 0, 0); 0, 0, 0, 0);
@ -1999,11 +2003,15 @@ mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *priv, struct mlx5_flow_attr *att
attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR; attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
} }
if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */ if (!attr->ct_attr.nf_ft) { /* means only ct clear action, and not ct_clear,ct() */
attr->ct_attr.offloaded = true;
return 0; return 0;
}
mutex_lock(&priv->control_lock); mutex_lock(&priv->control_lock);
err = __mlx5_tc_ct_flow_offload(priv, attr); err = __mlx5_tc_ct_flow_offload(priv, attr);
if (!err)
attr->ct_attr.offloaded = true;
mutex_unlock(&priv->control_lock); mutex_unlock(&priv->control_lock);
return err; return err;
@ -2021,7 +2029,7 @@ void
mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *priv, mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *priv,
struct mlx5_flow_attr *attr) struct mlx5_flow_attr *attr)
{ {
if (!attr->ct_attr.ft) /* no ct action, return */ if (!attr->ct_attr.offloaded) /* no ct action, return */
return; return;
if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */ if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */
return; return;

View file

@ -29,6 +29,7 @@ struct mlx5_ct_attr {
u32 ct_labels_id; u32 ct_labels_id;
u32 act_miss_mapping; u32 act_miss_mapping;
u64 act_miss_cookie; u64 act_miss_cookie;
bool offloaded;
struct mlx5_ct_ft *ft; struct mlx5_ct_ft *ft;
}; };

View file

@ -662,8 +662,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
/* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE) /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
* as we know this is a page_pool page. * as we know this is a page_pool page.
*/ */
page_pool_put_defragged_page(page->pp, page_pool_recycle_direct(page->pp, page);
page, -1, true);
} while (++n < num); } while (++n < num);
break; break;

View file

@ -190,6 +190,7 @@ static int accel_fs_tcp_create_groups(struct mlx5e_flow_table *ft,
in = kvzalloc(inlen, GFP_KERNEL); in = kvzalloc(inlen, GFP_KERNEL);
if (!in || !ft->g) { if (!in || !ft->g) {
kfree(ft->g); kfree(ft->g);
ft->g = NULL;
kvfree(in); kvfree(in);
return -ENOMEM; return -ENOMEM;
} }

View file

@ -390,10 +390,18 @@ static void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix)
{ {
struct mlx5e_wqe_frag_info *wi = get_frag(rq, ix); struct mlx5e_wqe_frag_info *wi = get_frag(rq, ix);
if (rq->xsk_pool) if (rq->xsk_pool) {
mlx5e_xsk_free_rx_wqe(wi); mlx5e_xsk_free_rx_wqe(wi);
else } else {
mlx5e_free_rx_wqe(rq, wi); mlx5e_free_rx_wqe(rq, wi);
/* Avoid a second release of the wqe pages: dealloc is called
* for the same missing wqes on regular RQ flush and on regular
* RQ close. This happens when XSK RQs come into play.
*/
for (int i = 0; i < rq->wqe.info.num_frags; i++, wi++)
wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
} }
static void mlx5e_xsk_free_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk) static void mlx5e_xsk_free_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
@ -1743,11 +1751,11 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
prog = rcu_dereference(rq->xdp_prog); prog = rcu_dereference(rq->xdp_prog);
if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) { if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) {
if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
struct mlx5e_wqe_frag_info *pwi; struct mlx5e_wqe_frag_info *pwi;
for (pwi = head_wi; pwi < wi; pwi++) for (pwi = head_wi; pwi < wi; pwi++)
pwi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE); pwi->frag_page->frags++;
} }
return NULL; /* page/packet was consumed by XDP */ return NULL; /* page/packet was consumed by XDP */
} }
@ -1817,12 +1825,8 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
rq, wi, cqe, cqe_bcnt); rq, wi, cqe, cqe_bcnt);
if (!skb) { if (!skb) {
/* probably for XDP */ /* probably for XDP */
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
/* do not return page to cache, wi->frag_page->frags++;
* it will be returned on XDP_TX completion.
*/
wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
goto wq_cyc_pop; goto wq_cyc_pop;
} }
@ -1868,12 +1872,8 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
rq, wi, cqe, cqe_bcnt); rq, wi, cqe, cqe_bcnt);
if (!skb) { if (!skb) {
/* probably for XDP */ /* probably for XDP */
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
/* do not return page to cache, wi->frag_page->frags++;
* it will be returned on XDP_TX completion.
*/
wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
goto wq_cyc_pop; goto wq_cyc_pop;
} }
@ -2052,12 +2052,12 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
if (prog) { if (prog) {
if (mlx5e_xdp_handle(rq, prog, &mxbuf)) { if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
int i; struct mlx5e_frag_page *pfp;
for (i = 0; i < sinfo->nr_frags; i++) for (pfp = head_page; pfp < frag_page; pfp++)
/* non-atomic */ pfp->frags++;
__set_bit(page_idx + i, wi->skip_release_bitmap);
return NULL; wi->linear_page.frags++;
} }
mlx5e_page_release_fragmented(rq, &wi->linear_page); mlx5e_page_release_fragmented(rq, &wi->linear_page);
return NULL; /* page/packet was consumed by XDP */ return NULL; /* page/packet was consumed by XDP */
@ -2155,7 +2155,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
cqe_bcnt, &mxbuf); cqe_bcnt, &mxbuf);
if (mlx5e_xdp_handle(rq, prog, &mxbuf)) { if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
__set_bit(page_idx, wi->skip_release_bitmap); /* non-atomic */ frag_page->frags++;
return NULL; /* page/packet was consumed by XDP */ return NULL; /* page/packet was consumed by XDP */
} }

View file

@ -1639,7 +1639,8 @@ static void remove_unready_flow(struct mlx5e_tc_flow *flow)
uplink_priv = &rpriv->uplink_priv; uplink_priv = &rpriv->uplink_priv;
mutex_lock(&uplink_priv->unready_flows_lock); mutex_lock(&uplink_priv->unready_flows_lock);
unready_flow_del(flow); if (flow_flag_test(flow, NOT_READY))
unready_flow_del(flow);
mutex_unlock(&uplink_priv->unready_flows_lock); mutex_unlock(&uplink_priv->unready_flows_lock);
} }
@ -1932,8 +1933,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
esw_attr = attr->esw_attr; esw_attr = attr->esw_attr;
mlx5e_put_flow_tunnel_id(flow); mlx5e_put_flow_tunnel_id(flow);
if (flow_flag_test(flow, NOT_READY)) remove_unready_flow(flow);
remove_unready_flow(flow);
if (mlx5e_is_offloaded_flow(flow)) { if (mlx5e_is_offloaded_flow(flow)) {
if (flow_flag_test(flow, SLOW)) if (flow_flag_test(flow, SLOW))

View file

@ -807,6 +807,9 @@ static int mlx5_esw_vport_caps_get(struct mlx5_eswitch *esw, struct mlx5_vport *
hca_caps = MLX5_ADDR_OF(query_hca_cap_out, query_ctx, capability); hca_caps = MLX5_ADDR_OF(query_hca_cap_out, query_ctx, capability);
vport->info.roce_enabled = MLX5_GET(cmd_hca_cap, hca_caps, roce); vport->info.roce_enabled = MLX5_GET(cmd_hca_cap, hca_caps, roce);
if (!MLX5_CAP_GEN_MAX(esw->dev, hca_cap_2))
goto out_free;
memset(query_ctx, 0, query_out_sz); memset(query_ctx, 0, query_out_sz);
err = mlx5_vport_get_other_func_cap(esw->dev, vport->vport, query_ctx, err = mlx5_vport_get_other_func_cap(esw->dev, vport->vport, query_ctx,
MLX5_CAP_GENERAL_2); MLX5_CAP_GENERAL_2);

View file

@ -68,14 +68,19 @@ static struct thermal_zone_device_ops mlx5_thermal_ops = {
int mlx5_thermal_init(struct mlx5_core_dev *mdev) int mlx5_thermal_init(struct mlx5_core_dev *mdev)
{ {
char data[THERMAL_NAME_LENGTH];
struct mlx5_thermal *thermal; struct mlx5_thermal *thermal;
struct thermal_zone_device *tzd; int err;
const char *data = "mlx5";
tzd = thermal_zone_get_zone_by_name(data); if (!mlx5_core_is_pf(mdev) && !mlx5_core_is_ecpf(mdev))
if (!IS_ERR(tzd))
return 0; return 0;
err = snprintf(data, sizeof(data), "mlx5_%s", dev_name(mdev->device));
if (err < 0 || err >= sizeof(data)) {
mlx5_core_err(mdev, "Failed to setup thermal zone name, %d\n", err);
return -EINVAL;
}
thermal = kzalloc(sizeof(*thermal), GFP_KERNEL); thermal = kzalloc(sizeof(*thermal), GFP_KERNEL);
if (!thermal) if (!thermal)
return -ENOMEM; return -ENOMEM;
@ -89,10 +94,10 @@ int mlx5_thermal_init(struct mlx5_core_dev *mdev)
&mlx5_thermal_ops, &mlx5_thermal_ops,
NULL, 0, MLX5_THERMAL_POLL_INT_MSEC); NULL, 0, MLX5_THERMAL_POLL_INT_MSEC);
if (IS_ERR(thermal->tzdev)) { if (IS_ERR(thermal->tzdev)) {
dev_err(mdev->device, "Failed to register thermal zone device (%s) %ld\n", err = PTR_ERR(thermal->tzdev);
data, PTR_ERR(thermal->tzdev)); mlx5_core_err(mdev, "Failed to register thermal zone device (%s) %d\n", data, err);
kfree(thermal); kfree(thermal);
return -EINVAL; return err;
} }
mdev->thermal = thermal; mdev->thermal = thermal;

View file

@ -46,7 +46,7 @@ config LAN743X
tristate "LAN743x support" tristate "LAN743x support"
depends on PCI depends on PCI
depends on PTP_1588_CLOCK_OPTIONAL depends on PTP_1588_CLOCK_OPTIONAL
select PHYLIB select FIXED_PHY
select CRC16 select CRC16
select CRC32 select CRC32
help help

View file

@ -2927,7 +2927,6 @@ int ocelot_init(struct ocelot *ocelot)
mutex_init(&ocelot->mact_lock); mutex_init(&ocelot->mact_lock);
mutex_init(&ocelot->fwd_domain_lock); mutex_init(&ocelot->fwd_domain_lock);
mutex_init(&ocelot->tas_lock);
spin_lock_init(&ocelot->ptp_clock_lock); spin_lock_init(&ocelot->ptp_clock_lock);
spin_lock_init(&ocelot->ts_id_lock); spin_lock_init(&ocelot->ts_id_lock);

View file

@ -67,10 +67,13 @@ void ocelot_port_update_active_preemptible_tcs(struct ocelot *ocelot, int port)
val = mm->preemptible_tcs; val = mm->preemptible_tcs;
/* Cut through switching doesn't work for preemptible priorities, /* Cut through switching doesn't work for preemptible priorities,
* so first make sure it is disabled. * so first make sure it is disabled. Also, changing the preemptible
* TCs affects the oversized frame dropping logic, so that needs to be
* re-triggered. And since tas_guard_bands_update() also implicitly
* calls cut_through_fwd(), we don't need to explicitly call it.
*/ */
mm->active_preemptible_tcs = val; mm->active_preemptible_tcs = val;
ocelot->ops->cut_through_fwd(ocelot); ocelot->ops->tas_guard_bands_update(ocelot, port);
dev_dbg(ocelot->dev, dev_dbg(ocelot->dev,
"port %d %s/%s, MM TX %s, preemptible TCs 0x%x, active 0x%x\n", "port %d %s/%s, MM TX %s, preemptible TCs 0x%x, active 0x%x\n",
@ -89,17 +92,14 @@ void ocelot_port_change_fp(struct ocelot *ocelot, int port,
{ {
struct ocelot_mm_state *mm = &ocelot->mm[port]; struct ocelot_mm_state *mm = &ocelot->mm[port];
mutex_lock(&ocelot->fwd_domain_lock); lockdep_assert_held(&ocelot->fwd_domain_lock);
if (mm->preemptible_tcs == preemptible_tcs) if (mm->preemptible_tcs == preemptible_tcs)
goto out_unlock; return;
mm->preemptible_tcs = preemptible_tcs; mm->preemptible_tcs = preemptible_tcs;
ocelot_port_update_active_preemptible_tcs(ocelot, port); ocelot_port_update_active_preemptible_tcs(ocelot, port);
out_unlock:
mutex_unlock(&ocelot->fwd_domain_lock);
} }
static void ocelot_mm_update_port_status(struct ocelot *ocelot, int port) static void ocelot_mm_update_port_status(struct ocelot *ocelot, int port)

View file

@ -353,12 +353,6 @@ err_out_reset:
ionic_reset(ionic); ionic_reset(ionic);
err_out_teardown: err_out_teardown:
ionic_dev_teardown(ionic); ionic_dev_teardown(ionic);
pci_clear_master(pdev);
/* Don't fail the probe for these errors, keep
* the hw interface around for inspection
*/
return 0;
err_out_unmap_bars: err_out_unmap_bars:
ionic_unmap_bars(ionic); ionic_unmap_bars(ionic);
err_out_pci_release_regions: err_out_pci_release_regions:

View file

@ -475,11 +475,6 @@ static void ionic_qcqs_free(struct ionic_lif *lif)
static void ionic_link_qcq_interrupts(struct ionic_qcq *src_qcq, static void ionic_link_qcq_interrupts(struct ionic_qcq *src_qcq,
struct ionic_qcq *n_qcq) struct ionic_qcq *n_qcq)
{ {
if (WARN_ON(n_qcq->flags & IONIC_QCQ_F_INTR)) {
ionic_intr_free(n_qcq->cq.lif->ionic, n_qcq->intr.index);
n_qcq->flags &= ~IONIC_QCQ_F_INTR;
}
n_qcq->intr.vector = src_qcq->intr.vector; n_qcq->intr.vector = src_qcq->intr.vector;
n_qcq->intr.index = src_qcq->intr.index; n_qcq->intr.index = src_qcq->intr.index;
n_qcq->napi_qcq = src_qcq->napi_qcq; n_qcq->napi_qcq = src_qcq->napi_qcq;

View file

@ -186,9 +186,6 @@ static int txgbe_calc_eeprom_checksum(struct wx *wx, u16 *checksum)
if (eeprom_ptrs) if (eeprom_ptrs)
kvfree(eeprom_ptrs); kvfree(eeprom_ptrs);
if (*checksum > TXGBE_EEPROM_SUM)
return -EINVAL;
*checksum = TXGBE_EEPROM_SUM - *checksum; *checksum = TXGBE_EEPROM_SUM - *checksum;
return 0; return 0;

View file

@ -184,13 +184,10 @@ static ssize_t nsim_dev_trap_fa_cookie_write(struct file *file,
cookie_len = (count - 1) / 2; cookie_len = (count - 1) / 2;
if ((count - 1) % 2) if ((count - 1) % 2)
return -EINVAL; return -EINVAL;
buf = kmalloc(count, GFP_KERNEL | __GFP_NOWARN);
if (!buf)
return -ENOMEM;
ret = simple_write_to_buffer(buf, count, ppos, data, count); buf = memdup_user(data, count);
if (ret < 0) if (IS_ERR(buf))
goto free_buf; return PTR_ERR(buf);
fa_cookie = kmalloc(sizeof(*fa_cookie) + cookie_len, fa_cookie = kmalloc(sizeof(*fa_cookie) + cookie_len,
GFP_KERNEL | __GFP_NOWARN); GFP_KERNEL | __GFP_NOWARN);

View file

@ -6157,8 +6157,11 @@ static int airo_get_rate(struct net_device *dev,
struct iw_param *vwrq = &wrqu->bitrate; struct iw_param *vwrq = &wrqu->bitrate;
struct airo_info *local = dev->ml_priv; struct airo_info *local = dev->ml_priv;
StatusRid status_rid; /* Card status info */ StatusRid status_rid; /* Card status info */
int ret;
readStatusRid(local, &status_rid, 1); ret = readStatusRid(local, &status_rid, 1);
if (ret)
return -EBUSY;
vwrq->value = le16_to_cpu(status_rid.currentXmitRate) * 500000; vwrq->value = le16_to_cpu(status_rid.currentXmitRate) * 500000;
/* If more than one rate, set auto */ /* If more than one rate, set auto */

View file

@ -84,7 +84,6 @@ const struct iwl_ht_params iwl_22000_ht_params = {
.mac_addr_from_csr = 0x380, \ .mac_addr_from_csr = 0x380, \
.ht_params = &iwl_22000_ht_params, \ .ht_params = &iwl_22000_ht_params, \
.nvm_ver = IWL_22000_NVM_VERSION, \ .nvm_ver = IWL_22000_NVM_VERSION, \
.trans.use_tfh = true, \
.trans.rf_id = true, \ .trans.rf_id = true, \
.trans.gen2 = true, \ .trans.gen2 = true, \
.nvm_type = IWL_NVM_EXT, \ .nvm_type = IWL_NVM_EXT, \
@ -122,7 +121,6 @@ const struct iwl_ht_params iwl_22000_ht_params = {
const struct iwl_cfg_trans_params iwl_qu_trans_cfg = { const struct iwl_cfg_trans_params iwl_qu_trans_cfg = {
.mq_rx_supported = true, .mq_rx_supported = true,
.use_tfh = true,
.rf_id = true, .rf_id = true,
.gen2 = true, .gen2 = true,
.device_family = IWL_DEVICE_FAMILY_22000, .device_family = IWL_DEVICE_FAMILY_22000,
@ -134,7 +132,6 @@ const struct iwl_cfg_trans_params iwl_qu_trans_cfg = {
const struct iwl_cfg_trans_params iwl_qu_medium_latency_trans_cfg = { const struct iwl_cfg_trans_params iwl_qu_medium_latency_trans_cfg = {
.mq_rx_supported = true, .mq_rx_supported = true,
.use_tfh = true,
.rf_id = true, .rf_id = true,
.gen2 = true, .gen2 = true,
.device_family = IWL_DEVICE_FAMILY_22000, .device_family = IWL_DEVICE_FAMILY_22000,
@ -146,7 +143,6 @@ const struct iwl_cfg_trans_params iwl_qu_medium_latency_trans_cfg = {
const struct iwl_cfg_trans_params iwl_qu_long_latency_trans_cfg = { const struct iwl_cfg_trans_params iwl_qu_long_latency_trans_cfg = {
.mq_rx_supported = true, .mq_rx_supported = true,
.use_tfh = true,
.rf_id = true, .rf_id = true,
.gen2 = true, .gen2 = true,
.device_family = IWL_DEVICE_FAMILY_22000, .device_family = IWL_DEVICE_FAMILY_22000,
@ -200,7 +196,6 @@ const struct iwl_cfg_trans_params iwl_ax200_trans_cfg = {
.device_family = IWL_DEVICE_FAMILY_22000, .device_family = IWL_DEVICE_FAMILY_22000,
.base_params = &iwl_22000_base_params, .base_params = &iwl_22000_base_params,
.mq_rx_supported = true, .mq_rx_supported = true,
.use_tfh = true,
.rf_id = true, .rf_id = true,
.gen2 = true, .gen2 = true,
.bisr_workaround = 1, .bisr_workaround = 1,

View file

@ -256,7 +256,6 @@ enum iwl_cfg_trans_ltr_delay {
* @xtal_latency: power up latency to get the xtal stabilized * @xtal_latency: power up latency to get the xtal stabilized
* @extra_phy_cfg_flags: extra configuration flags to pass to the PHY * @extra_phy_cfg_flags: extra configuration flags to pass to the PHY
* @rf_id: need to read rf_id to determine the firmware image * @rf_id: need to read rf_id to determine the firmware image
* @use_tfh: use TFH
* @gen2: 22000 and on transport operation * @gen2: 22000 and on transport operation
* @mq_rx_supported: multi-queue rx support * @mq_rx_supported: multi-queue rx support
* @integrated: discrete or integrated * @integrated: discrete or integrated
@ -271,7 +270,6 @@ struct iwl_cfg_trans_params {
u32 xtal_latency; u32 xtal_latency;
u32 extra_phy_cfg_flags; u32 extra_phy_cfg_flags;
u32 rf_id:1, u32 rf_id:1,
use_tfh:1,
gen2:1, gen2:1,
mq_rx_supported:1, mq_rx_supported:1,
integrated:1, integrated:1,

View file

@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/* /*
* Copyright (C) 2005-2014, 2018-2021 Intel Corporation * Copyright (C) 2005-2014, 2018-2021, 2023 Intel Corporation
* Copyright (C) 2015-2017 Intel Deutschland GmbH * Copyright (C) 2015-2017 Intel Deutschland GmbH
*/ */
#ifndef __iwl_fh_h__ #ifndef __iwl_fh_h__
@ -71,7 +71,7 @@
static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans, static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans,
unsigned int chnl) unsigned int chnl)
{ {
if (trans->trans_cfg->use_tfh) { if (trans->trans_cfg->gen2) {
WARN_ON_ONCE(chnl >= 64); WARN_ON_ONCE(chnl >= 64);
return TFH_TFDQ_CBB_TABLE + 8 * chnl; return TFH_TFDQ_CBB_TABLE + 8 * chnl;
} }

View file

@ -2,7 +2,7 @@
/* /*
* Copyright (C) 2015 Intel Mobile Communications GmbH * Copyright (C) 2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH * Copyright (C) 2016-2017 Intel Deutschland GmbH
* Copyright (C) 2019-2021 Intel Corporation * Copyright (C) 2019-2021, 2023 Intel Corporation
*/ */
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/bsearch.h> #include <linux/bsearch.h>
@ -42,7 +42,7 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty); WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty);
if (trans->trans_cfg->use_tfh) { if (trans->trans_cfg->gen2) {
trans->txqs.tfd.addr_size = 64; trans->txqs.tfd.addr_size = 64;
trans->txqs.tfd.max_tbs = IWL_TFH_NUM_TBS; trans->txqs.tfd.max_tbs = IWL_TFH_NUM_TBS;
trans->txqs.tfd.size = sizeof(struct iwl_tfh_tfd); trans->txqs.tfd.size = sizeof(struct iwl_tfh_tfd);
@ -101,7 +101,7 @@ int iwl_trans_init(struct iwl_trans *trans)
/* Some things must not change even if the config does */ /* Some things must not change even if the config does */
WARN_ON(trans->txqs.tfd.addr_size != WARN_ON(trans->txqs.tfd.addr_size !=
(trans->trans_cfg->use_tfh ? 64 : 36)); (trans->trans_cfg->gen2 ? 64 : 36));
snprintf(trans->dev_cmd_pool_name, sizeof(trans->dev_cmd_pool_name), snprintf(trans->dev_cmd_pool_name, sizeof(trans->dev_cmd_pool_name),
"iwl_cmd_pool:%s", dev_name(trans->dev)); "iwl_cmd_pool:%s", dev_name(trans->dev));

View file

@ -1450,7 +1450,7 @@ static inline bool iwl_mvm_has_new_station_api(const struct iwl_fw *fw)
static inline bool iwl_mvm_has_new_tx_api(struct iwl_mvm *mvm) static inline bool iwl_mvm_has_new_tx_api(struct iwl_mvm *mvm)
{ {
/* TODO - replace with TLV once defined */ /* TODO - replace with TLV once defined */
return mvm->trans->trans_cfg->use_tfh; return mvm->trans->trans_cfg->gen2;
} }
static inline bool iwl_mvm_has_unified_ucode(struct iwl_mvm *mvm) static inline bool iwl_mvm_has_unified_ucode(struct iwl_mvm *mvm)

View file

@ -819,7 +819,7 @@ static int iwl_pcie_load_cpu_sections_8000(struct iwl_trans *trans,
iwl_enable_interrupts(trans); iwl_enable_interrupts(trans);
if (trans->trans_cfg->use_tfh) { if (trans->trans_cfg->gen2) {
if (cpu == 1) if (cpu == 1)
iwl_write_prph(trans, UREG_UCODE_LOAD_STATUS, iwl_write_prph(trans, UREG_UCODE_LOAD_STATUS,
0xFFFF); 0xFFFF);
@ -3394,7 +3394,7 @@ iwl_trans_pcie_dump_data(struct iwl_trans *trans,
u8 tfdidx; u8 tfdidx;
u32 caplen, cmdlen; u32 caplen, cmdlen;
if (trans->trans_cfg->use_tfh) if (trans->trans_cfg->gen2)
tfdidx = idx; tfdidx = idx;
else else
tfdidx = ptr; tfdidx = ptr;

View file

@ -364,7 +364,7 @@ void iwl_trans_pcie_tx_reset(struct iwl_trans *trans)
for (txq_id = 0; txq_id < trans->trans_cfg->base_params->num_of_queues; for (txq_id = 0; txq_id < trans->trans_cfg->base_params->num_of_queues;
txq_id++) { txq_id++) {
struct iwl_txq *txq = trans->txqs.txq[txq_id]; struct iwl_txq *txq = trans->txqs.txq[txq_id];
if (trans->trans_cfg->use_tfh) if (trans->trans_cfg->gen2)
iwl_write_direct64(trans, iwl_write_direct64(trans,
FH_MEM_CBBC_QUEUE(trans, txq_id), FH_MEM_CBBC_QUEUE(trans, txq_id),
txq->dma_addr); txq->dma_addr);

View file

@ -985,7 +985,7 @@ void iwl_txq_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq)
bool active; bool active;
u8 fifo; u8 fifo;
if (trans->trans_cfg->use_tfh) { if (trans->trans_cfg->gen2) {
IWL_ERR(trans, "Queue %d is stuck %d %d\n", txq_id, IWL_ERR(trans, "Queue %d is stuck %d %d\n", txq_id,
txq->read_ptr, txq->write_ptr); txq->read_ptr, txq->write_ptr);
/* TODO: access new SCD registers and dump them */ /* TODO: access new SCD registers and dump them */
@ -1040,7 +1040,7 @@ int iwl_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
if (WARN_ON(txq->entries || txq->tfds)) if (WARN_ON(txq->entries || txq->tfds))
return -EINVAL; return -EINVAL;
if (trans->trans_cfg->use_tfh) if (trans->trans_cfg->gen2)
tfd_sz = trans->txqs.tfd.size * slots_num; tfd_sz = trans->txqs.tfd.size * slots_num;
timer_setup(&txq->stuck_timer, iwl_txq_stuck_timer, 0); timer_setup(&txq->stuck_timer, iwl_txq_stuck_timer, 0);
@ -1347,7 +1347,7 @@ static inline dma_addr_t iwl_txq_gen1_tfd_tb_get_addr(struct iwl_trans *trans,
dma_addr_t addr; dma_addr_t addr;
dma_addr_t hi_len; dma_addr_t hi_len;
if (trans->trans_cfg->use_tfh) { if (trans->trans_cfg->gen2) {
struct iwl_tfh_tfd *tfh_tfd = _tfd; struct iwl_tfh_tfd *tfh_tfd = _tfd;
struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx]; struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx];
@ -1408,7 +1408,7 @@ void iwl_txq_gen1_tfd_unmap(struct iwl_trans *trans,
meta->tbs = 0; meta->tbs = 0;
if (trans->trans_cfg->use_tfh) { if (trans->trans_cfg->gen2) {
struct iwl_tfh_tfd *tfd_fh = (void *)tfd; struct iwl_tfh_tfd *tfd_fh = (void *)tfd;
tfd_fh->num_tbs = 0; tfd_fh->num_tbs = 0;
@ -1625,7 +1625,7 @@ void iwl_txq_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
txq->entries[read_ptr].skb = NULL; txq->entries[read_ptr].skb = NULL;
if (!trans->trans_cfg->use_tfh) if (!trans->trans_cfg->gen2)
iwl_txq_gen1_inval_byte_cnt_tbl(trans, txq); iwl_txq_gen1_inval_byte_cnt_tbl(trans, txq);
iwl_txq_free_tfd(trans, txq); iwl_txq_free_tfd(trans, txq);

View file

@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/* /*
* Copyright (C) 2020-2022 Intel Corporation * Copyright (C) 2020-2023 Intel Corporation
*/ */
#ifndef __iwl_trans_queue_tx_h__ #ifndef __iwl_trans_queue_tx_h__
#define __iwl_trans_queue_tx_h__ #define __iwl_trans_queue_tx_h__
@ -38,7 +38,7 @@ static inline void iwl_wake_queue(struct iwl_trans *trans,
static inline void *iwl_txq_get_tfd(struct iwl_trans *trans, static inline void *iwl_txq_get_tfd(struct iwl_trans *trans,
struct iwl_txq *txq, int idx) struct iwl_txq *txq, int idx)
{ {
if (trans->trans_cfg->use_tfh) if (trans->trans_cfg->gen2)
idx = iwl_txq_get_cmd_index(txq, idx); idx = iwl_txq_get_cmd_index(txq, idx);
return (u8 *)txq->tfds + trans->txqs.tfd.size * idx; return (u8 *)txq->tfds + trans->txqs.tfd.size * idx;
@ -135,7 +135,7 @@ static inline u8 iwl_txq_gen1_tfd_get_num_tbs(struct iwl_trans *trans,
{ {
struct iwl_tfd *tfd; struct iwl_tfd *tfd;
if (trans->trans_cfg->use_tfh) { if (trans->trans_cfg->gen2) {
struct iwl_tfh_tfd *tfh_tfd = _tfd; struct iwl_tfh_tfd *tfh_tfd = _tfd;
return le16_to_cpu(tfh_tfd->num_tbs) & 0x1f; return le16_to_cpu(tfh_tfd->num_tbs) & 0x1f;
@ -151,7 +151,7 @@ static inline u16 iwl_txq_gen1_tfd_tb_get_len(struct iwl_trans *trans,
struct iwl_tfd *tfd; struct iwl_tfd *tfd;
struct iwl_tfd_tb *tb; struct iwl_tfd_tb *tb;
if (trans->trans_cfg->use_tfh) { if (trans->trans_cfg->gen2) {
struct iwl_tfh_tfd *tfh_tfd = _tfd; struct iwl_tfh_tfd *tfh_tfd = _tfd;
struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx]; struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx];

View file

@ -231,10 +231,6 @@ int mt7921_dma_init(struct mt7921_dev *dev)
if (ret) if (ret)
return ret; return ret;
ret = mt7921_wfsys_reset(dev);
if (ret)
return ret;
/* init tx queue */ /* init tx queue */
ret = mt76_connac_init_tx_queues(dev->phy.mt76, MT7921_TXQ_BAND0, ret = mt76_connac_init_tx_queues(dev->phy.mt76, MT7921_TXQ_BAND0,
MT7921_TX_RING_SIZE, MT7921_TX_RING_SIZE,

View file

@ -476,12 +476,6 @@ static int mt7921_load_firmware(struct mt7921_dev *dev)
{ {
int ret; int ret;
ret = mt76_get_field(dev, MT_CONN_ON_MISC, MT_TOP_MISC2_FW_N9_RDY);
if (ret && mt76_is_mmio(&dev->mt76)) {
dev_dbg(dev->mt76.dev, "Firmware is already download\n");
goto fw_loaded;
}
ret = mt76_connac2_load_patch(&dev->mt76, mt7921_patch_name(dev)); ret = mt76_connac2_load_patch(&dev->mt76, mt7921_patch_name(dev));
if (ret) if (ret)
return ret; return ret;
@ -504,8 +498,6 @@ static int mt7921_load_firmware(struct mt7921_dev *dev)
return -EIO; return -EIO;
} }
fw_loaded:
#ifdef CONFIG_PM #ifdef CONFIG_PM
dev->mt76.hw->wiphy->wowlan = &mt76_connac_wowlan_support; dev->mt76.hw->wiphy->wowlan = &mt76_connac_wowlan_support;
#endif /* CONFIG_PM */ #endif /* CONFIG_PM */

View file

@ -325,6 +325,10 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
bus_ops->rmw = mt7921_rmw; bus_ops->rmw = mt7921_rmw;
dev->mt76.bus = bus_ops; dev->mt76.bus = bus_ops;
ret = mt7921e_mcu_fw_pmctrl(dev);
if (ret)
goto err_free_dev;
ret = __mt7921e_mcu_drv_pmctrl(dev); ret = __mt7921e_mcu_drv_pmctrl(dev);
if (ret) if (ret)
goto err_free_dev; goto err_free_dev;
@ -333,6 +337,10 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
(mt7921_l1_rr(dev, MT_HW_REV) & 0xff); (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev); dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
ret = mt7921_wfsys_reset(dev);
if (ret)
goto err_free_dev;
mt76_wr(dev, MT_WFDMA0_HOST_INT_ENA, 0); mt76_wr(dev, MT_WFDMA0_HOST_INT_ENA, 0);
mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff); mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);

View file

@ -3026,17 +3026,18 @@ static ssize_t rtw89_debug_priv_send_h2c_set(struct file *filp,
struct rtw89_debugfs_priv *debugfs_priv = filp->private_data; struct rtw89_debugfs_priv *debugfs_priv = filp->private_data;
struct rtw89_dev *rtwdev = debugfs_priv->rtwdev; struct rtw89_dev *rtwdev = debugfs_priv->rtwdev;
u8 *h2c; u8 *h2c;
int ret;
u16 h2c_len = count / 2; u16 h2c_len = count / 2;
h2c = rtw89_hex2bin_user(rtwdev, user_buf, count); h2c = rtw89_hex2bin_user(rtwdev, user_buf, count);
if (IS_ERR(h2c)) if (IS_ERR(h2c))
return -EFAULT; return -EFAULT;
rtw89_fw_h2c_raw(rtwdev, h2c, h2c_len); ret = rtw89_fw_h2c_raw(rtwdev, h2c, h2c_len);
kfree(h2c); kfree(h2c);
return count; return ret ? ret : count;
} }
static int static int

View file

@ -36,7 +36,7 @@ static const struct smcd_ops ism_ops;
static struct ism_client *clients[MAX_CLIENTS]; /* use an array rather than */ static struct ism_client *clients[MAX_CLIENTS]; /* use an array rather than */
/* a list for fast mapping */ /* a list for fast mapping */
static u8 max_client; static u8 max_client;
static DEFINE_SPINLOCK(clients_lock); static DEFINE_MUTEX(clients_lock);
struct ism_dev_list { struct ism_dev_list {
struct list_head list; struct list_head list;
struct mutex mutex; /* protects ism device list */ struct mutex mutex; /* protects ism device list */
@ -47,14 +47,22 @@ static struct ism_dev_list ism_dev_list = {
.mutex = __MUTEX_INITIALIZER(ism_dev_list.mutex), .mutex = __MUTEX_INITIALIZER(ism_dev_list.mutex),
}; };
static void ism_setup_forwarding(struct ism_client *client, struct ism_dev *ism)
{
unsigned long flags;
spin_lock_irqsave(&ism->lock, flags);
ism->subs[client->id] = client;
spin_unlock_irqrestore(&ism->lock, flags);
}
int ism_register_client(struct ism_client *client) int ism_register_client(struct ism_client *client)
{ {
struct ism_dev *ism; struct ism_dev *ism;
unsigned long flags;
int i, rc = -ENOSPC; int i, rc = -ENOSPC;
mutex_lock(&ism_dev_list.mutex); mutex_lock(&ism_dev_list.mutex);
spin_lock_irqsave(&clients_lock, flags); mutex_lock(&clients_lock);
for (i = 0; i < MAX_CLIENTS; ++i) { for (i = 0; i < MAX_CLIENTS; ++i) {
if (!clients[i]) { if (!clients[i]) {
clients[i] = client; clients[i] = client;
@ -65,12 +73,14 @@ int ism_register_client(struct ism_client *client)
break; break;
} }
} }
spin_unlock_irqrestore(&clients_lock, flags); mutex_unlock(&clients_lock);
if (i < MAX_CLIENTS) { if (i < MAX_CLIENTS) {
/* initialize with all devices that we got so far */ /* initialize with all devices that we got so far */
list_for_each_entry(ism, &ism_dev_list.list, list) { list_for_each_entry(ism, &ism_dev_list.list, list) {
ism->priv[i] = NULL; ism->priv[i] = NULL;
client->add(ism); client->add(ism);
ism_setup_forwarding(client, ism);
} }
} }
mutex_unlock(&ism_dev_list.mutex); mutex_unlock(&ism_dev_list.mutex);
@ -86,25 +96,32 @@ int ism_unregister_client(struct ism_client *client)
int rc = 0; int rc = 0;
mutex_lock(&ism_dev_list.mutex); mutex_lock(&ism_dev_list.mutex);
spin_lock_irqsave(&clients_lock, flags); list_for_each_entry(ism, &ism_dev_list.list, list) {
spin_lock_irqsave(&ism->lock, flags);
/* Stop forwarding IRQs and events */
ism->subs[client->id] = NULL;
for (int i = 0; i < ISM_NR_DMBS; ++i) {
if (ism->sba_client_arr[i] == client->id) {
WARN(1, "%s: attempt to unregister '%s' with registered dmb(s)\n",
__func__, client->name);
rc = -EBUSY;
goto err_reg_dmb;
}
}
spin_unlock_irqrestore(&ism->lock, flags);
}
mutex_unlock(&ism_dev_list.mutex);
mutex_lock(&clients_lock);
clients[client->id] = NULL; clients[client->id] = NULL;
if (client->id + 1 == max_client) if (client->id + 1 == max_client)
max_client--; max_client--;
spin_unlock_irqrestore(&clients_lock, flags); mutex_unlock(&clients_lock);
list_for_each_entry(ism, &ism_dev_list.list, list) { return rc;
for (int i = 0; i < ISM_NR_DMBS; ++i) {
if (ism->sba_client_arr[i] == client->id) {
pr_err("%s: attempt to unregister client '%s'"
"with registered dmb(s)\n", __func__,
client->name);
rc = -EBUSY;
goto out;
}
}
}
out:
mutex_unlock(&ism_dev_list.mutex);
err_reg_dmb:
spin_unlock_irqrestore(&ism->lock, flags);
mutex_unlock(&ism_dev_list.mutex);
return rc; return rc;
} }
EXPORT_SYMBOL_GPL(ism_unregister_client); EXPORT_SYMBOL_GPL(ism_unregister_client);
@ -328,6 +345,7 @@ int ism_register_dmb(struct ism_dev *ism, struct ism_dmb *dmb,
struct ism_client *client) struct ism_client *client)
{ {
union ism_reg_dmb cmd; union ism_reg_dmb cmd;
unsigned long flags;
int ret; int ret;
ret = ism_alloc_dmb(ism, dmb); ret = ism_alloc_dmb(ism, dmb);
@ -351,7 +369,9 @@ int ism_register_dmb(struct ism_dev *ism, struct ism_dmb *dmb,
goto out; goto out;
} }
dmb->dmb_tok = cmd.response.dmb_tok; dmb->dmb_tok = cmd.response.dmb_tok;
spin_lock_irqsave(&ism->lock, flags);
ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = client->id; ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = client->id;
spin_unlock_irqrestore(&ism->lock, flags);
out: out:
return ret; return ret;
} }
@ -360,6 +380,7 @@ EXPORT_SYMBOL_GPL(ism_register_dmb);
int ism_unregister_dmb(struct ism_dev *ism, struct ism_dmb *dmb) int ism_unregister_dmb(struct ism_dev *ism, struct ism_dmb *dmb)
{ {
union ism_unreg_dmb cmd; union ism_unreg_dmb cmd;
unsigned long flags;
int ret; int ret;
memset(&cmd, 0, sizeof(cmd)); memset(&cmd, 0, sizeof(cmd));
@ -368,7 +389,9 @@ int ism_unregister_dmb(struct ism_dev *ism, struct ism_dmb *dmb)
cmd.request.dmb_tok = dmb->dmb_tok; cmd.request.dmb_tok = dmb->dmb_tok;
spin_lock_irqsave(&ism->lock, flags);
ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = NO_CLIENT; ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = NO_CLIENT;
spin_unlock_irqrestore(&ism->lock, flags);
ret = ism_cmd(ism, &cmd); ret = ism_cmd(ism, &cmd);
if (ret && ret != ISM_ERROR) if (ret && ret != ISM_ERROR)
@ -491,6 +514,7 @@ static u16 ism_get_chid(struct ism_dev *ism)
static void ism_handle_event(struct ism_dev *ism) static void ism_handle_event(struct ism_dev *ism)
{ {
struct ism_event *entry; struct ism_event *entry;
struct ism_client *clt;
int i; int i;
while ((ism->ieq_idx + 1) != READ_ONCE(ism->ieq->header.idx)) { while ((ism->ieq_idx + 1) != READ_ONCE(ism->ieq->header.idx)) {
@ -499,21 +523,21 @@ static void ism_handle_event(struct ism_dev *ism)
entry = &ism->ieq->entry[ism->ieq_idx]; entry = &ism->ieq->entry[ism->ieq_idx];
debug_event(ism_debug_info, 2, entry, sizeof(*entry)); debug_event(ism_debug_info, 2, entry, sizeof(*entry));
spin_lock(&clients_lock); for (i = 0; i < max_client; ++i) {
for (i = 0; i < max_client; ++i) clt = ism->subs[i];
if (clients[i]) if (clt)
clients[i]->handle_event(ism, entry); clt->handle_event(ism, entry);
spin_unlock(&clients_lock); }
} }
} }
static irqreturn_t ism_handle_irq(int irq, void *data) static irqreturn_t ism_handle_irq(int irq, void *data)
{ {
struct ism_dev *ism = data; struct ism_dev *ism = data;
struct ism_client *clt;
unsigned long bit, end; unsigned long bit, end;
unsigned long *bv; unsigned long *bv;
u16 dmbemask; u16 dmbemask;
u8 client_id;
bv = (void *) &ism->sba->dmb_bits[ISM_DMB_WORD_OFFSET]; bv = (void *) &ism->sba->dmb_bits[ISM_DMB_WORD_OFFSET];
end = sizeof(ism->sba->dmb_bits) * BITS_PER_BYTE - ISM_DMB_BIT_OFFSET; end = sizeof(ism->sba->dmb_bits) * BITS_PER_BYTE - ISM_DMB_BIT_OFFSET;
@ -530,8 +554,10 @@ static irqreturn_t ism_handle_irq(int irq, void *data)
dmbemask = ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET]; dmbemask = ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET];
ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0;
barrier(); barrier();
clt = clients[ism->sba_client_arr[bit]]; client_id = ism->sba_client_arr[bit];
clt->handle_irq(ism, bit + ISM_DMB_BIT_OFFSET, dmbemask); if (unlikely(client_id == NO_CLIENT || !ism->subs[client_id]))
continue;
ism->subs[client_id]->handle_irq(ism, bit + ISM_DMB_BIT_OFFSET, dmbemask);
} }
if (ism->sba->e) { if (ism->sba->e) {
@ -548,20 +574,9 @@ static u64 ism_get_local_gid(struct ism_dev *ism)
return ism->local_gid; return ism->local_gid;
} }
static void ism_dev_add_work_func(struct work_struct *work)
{
struct ism_client *client = container_of(work, struct ism_client,
add_work);
client->add(client->tgt_ism);
atomic_dec(&client->tgt_ism->add_dev_cnt);
wake_up(&client->tgt_ism->waitq);
}
static int ism_dev_init(struct ism_dev *ism) static int ism_dev_init(struct ism_dev *ism)
{ {
struct pci_dev *pdev = ism->pdev; struct pci_dev *pdev = ism->pdev;
unsigned long flags;
int i, ret; int i, ret;
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI); ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
@ -594,25 +609,16 @@ static int ism_dev_init(struct ism_dev *ism)
/* hardware is V2 capable */ /* hardware is V2 capable */
ism_create_system_eid(); ism_create_system_eid();
init_waitqueue_head(&ism->waitq);
atomic_set(&ism->free_clients_cnt, 0);
atomic_set(&ism->add_dev_cnt, 0);
wait_event(ism->waitq, !atomic_read(&ism->add_dev_cnt));
spin_lock_irqsave(&clients_lock, flags);
for (i = 0; i < max_client; ++i)
if (clients[i]) {
INIT_WORK(&clients[i]->add_work,
ism_dev_add_work_func);
clients[i]->tgt_ism = ism;
atomic_inc(&ism->add_dev_cnt);
schedule_work(&clients[i]->add_work);
}
spin_unlock_irqrestore(&clients_lock, flags);
wait_event(ism->waitq, !atomic_read(&ism->add_dev_cnt));
mutex_lock(&ism_dev_list.mutex); mutex_lock(&ism_dev_list.mutex);
mutex_lock(&clients_lock);
for (i = 0; i < max_client; ++i) {
if (clients[i]) {
clients[i]->add(ism);
ism_setup_forwarding(clients[i], ism);
}
}
mutex_unlock(&clients_lock);
list_add(&ism->list, &ism_dev_list.list); list_add(&ism->list, &ism_dev_list.list);
mutex_unlock(&ism_dev_list.mutex); mutex_unlock(&ism_dev_list.mutex);
@ -687,36 +693,24 @@ err_dev:
return ret; return ret;
} }
static void ism_dev_remove_work_func(struct work_struct *work)
{
struct ism_client *client = container_of(work, struct ism_client,
remove_work);
client->remove(client->tgt_ism);
atomic_dec(&client->tgt_ism->free_clients_cnt);
wake_up(&client->tgt_ism->waitq);
}
/* Callers must hold ism_dev_list.mutex */
static void ism_dev_exit(struct ism_dev *ism) static void ism_dev_exit(struct ism_dev *ism)
{ {
struct pci_dev *pdev = ism->pdev; struct pci_dev *pdev = ism->pdev;
unsigned long flags; unsigned long flags;
int i; int i;
wait_event(ism->waitq, !atomic_read(&ism->free_clients_cnt)); spin_lock_irqsave(&ism->lock, flags);
spin_lock_irqsave(&clients_lock, flags);
for (i = 0; i < max_client; ++i) for (i = 0; i < max_client; ++i)
if (clients[i]) { ism->subs[i] = NULL;
INIT_WORK(&clients[i]->remove_work, spin_unlock_irqrestore(&ism->lock, flags);
ism_dev_remove_work_func);
clients[i]->tgt_ism = ism;
atomic_inc(&ism->free_clients_cnt);
schedule_work(&clients[i]->remove_work);
}
spin_unlock_irqrestore(&clients_lock, flags);
wait_event(ism->waitq, !atomic_read(&ism->free_clients_cnt)); mutex_lock(&ism_dev_list.mutex);
mutex_lock(&clients_lock);
for (i = 0; i < max_client; ++i) {
if (clients[i])
clients[i]->remove(ism);
}
mutex_unlock(&clients_lock);
if (SYSTEM_EID.serial_number[0] != '0' || if (SYSTEM_EID.serial_number[0] != '0' ||
SYSTEM_EID.type[0] != '0') SYSTEM_EID.type[0] != '0')
@ -727,15 +721,14 @@ static void ism_dev_exit(struct ism_dev *ism)
kfree(ism->sba_client_arr); kfree(ism->sba_client_arr);
pci_free_irq_vectors(pdev); pci_free_irq_vectors(pdev);
list_del_init(&ism->list); list_del_init(&ism->list);
mutex_unlock(&ism_dev_list.mutex);
} }
static void ism_remove(struct pci_dev *pdev) static void ism_remove(struct pci_dev *pdev)
{ {
struct ism_dev *ism = dev_get_drvdata(&pdev->dev); struct ism_dev *ism = dev_get_drvdata(&pdev->dev);
mutex_lock(&ism_dev_list.mutex);
ism_dev_exit(ism); ism_dev_exit(ism);
mutex_unlock(&ism_dev_list.mutex);
pci_release_mem_regions(pdev); pci_release_mem_regions(pdev);
pci_disable_device(pdev); pci_disable_device(pdev);

View file

@ -44,9 +44,7 @@ struct ism_dev {
u64 local_gid; u64 local_gid;
int ieq_idx; int ieq_idx;
atomic_t free_clients_cnt; struct ism_client *subs[MAX_CLIENTS];
atomic_t add_dev_cnt;
wait_queue_head_t waitq;
}; };
struct ism_event { struct ism_event {
@ -68,9 +66,6 @@ struct ism_client {
*/ */
void (*handle_irq)(struct ism_dev *dev, unsigned int bit, u16 dmbemask); void (*handle_irq)(struct ism_dev *dev, unsigned int bit, u16 dmbemask);
/* Private area - don't touch! */ /* Private area - don't touch! */
struct work_struct remove_work;
struct work_struct add_work;
struct ism_dev *tgt_ism;
u8 id; u8 id;
}; };

View file

@ -67,6 +67,9 @@ struct nf_conntrack_tuple {
/* The protocol. */ /* The protocol. */
u_int8_t protonum; u_int8_t protonum;
/* The direction must be ignored for the tuplehash */
struct { } __nfct_hash_offsetend;
/* The direction (for tuplehash) */ /* The direction (for tuplehash) */
u_int8_t dir; u_int8_t dir;
} dst; } dst;

View file

@ -1211,6 +1211,29 @@ int __nft_release_basechain(struct nft_ctx *ctx);
unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv); unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv);
static inline bool nft_use_inc(u32 *use)
{
if (*use == UINT_MAX)
return false;
(*use)++;
return true;
}
static inline void nft_use_dec(u32 *use)
{
WARN_ON_ONCE((*use)-- == 0);
}
/* For error and abort path: restore use counter to previous state. */
static inline void nft_use_inc_restore(u32 *use)
{
WARN_ON_ONCE(!nft_use_inc(use));
}
#define nft_use_dec_restore nft_use_dec
/** /**
* struct nft_table - nf_tables table * struct nft_table - nf_tables table
* *
@ -1296,8 +1319,8 @@ struct nft_object {
struct list_head list; struct list_head list;
struct rhlist_head rhlhead; struct rhlist_head rhlhead;
struct nft_object_hash_key key; struct nft_object_hash_key key;
u32 genmask:2, u32 genmask:2;
use:30; u32 use;
u64 handle; u64 handle;
u16 udlen; u16 udlen;
u8 *udata; u8 *udata;
@ -1399,8 +1422,8 @@ struct nft_flowtable {
char *name; char *name;
int hooknum; int hooknum;
int ops_len; int ops_len;
u32 genmask:2, u32 genmask:2;
use:30; u32 use;
u64 handle; u64 handle;
/* runtime data below here */ /* runtime data below here */
struct list_head hook_list ____cacheline_aligned; struct list_head hook_list ____cacheline_aligned;

View file

@ -134,7 +134,7 @@ extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
*/ */
static inline unsigned int psched_mtu(const struct net_device *dev) static inline unsigned int psched_mtu(const struct net_device *dev)
{ {
return dev->mtu + dev->hard_header_len; return READ_ONCE(dev->mtu) + dev->hard_header_len;
} }
static inline struct net *qdisc_net(struct Qdisc *q) static inline struct net *qdisc_net(struct Qdisc *q)

View file

@ -663,6 +663,7 @@ struct ocelot_ops {
struct flow_stats *stats); struct flow_stats *stats);
void (*cut_through_fwd)(struct ocelot *ocelot); void (*cut_through_fwd)(struct ocelot *ocelot);
void (*tas_clock_adjust)(struct ocelot *ocelot); void (*tas_clock_adjust)(struct ocelot *ocelot);
void (*tas_guard_bands_update)(struct ocelot *ocelot, int port);
void (*update_stats)(struct ocelot *ocelot); void (*update_stats)(struct ocelot *ocelot);
}; };
@ -863,12 +864,12 @@ struct ocelot {
struct mutex stat_view_lock; struct mutex stat_view_lock;
/* Lock for serializing access to the MAC table */ /* Lock for serializing access to the MAC table */
struct mutex mact_lock; struct mutex mact_lock;
/* Lock for serializing forwarding domain changes */ /* Lock for serializing forwarding domain changes, including the
* configuration of the Time-Aware Shaper, MAC Merge layer and
* cut-through forwarding, on which it depends
*/
struct mutex fwd_domain_lock; struct mutex fwd_domain_lock;
/* Lock for serializing Time-Aware Shaper changes */
struct mutex tas_lock;
struct workqueue_struct *owq; struct workqueue_struct *owq;
u8 ptp:1; u8 ptp:1;

View file

@@ -122,22 +122,6 @@ static void get_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
atomic_inc(&rcpu->refcnt);
}
-/* called from workqueue, to workaround syscall using preempt_disable */
-static void cpu_map_kthread_stop(struct work_struct *work)
-{
-struct bpf_cpu_map_entry *rcpu;
-rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq);
-/* Wait for flush in __cpu_map_entry_free(), via full RCU barrier,
- * as it waits until all in-flight call_rcu() callbacks complete.
- */
-rcu_barrier();
-/* kthread_stop will wake_up_process and wait for it to complete */
-kthread_stop(rcpu->kthread);
-}
static void __cpu_map_ring_cleanup(struct ptr_ring *ring)
{
/* The tear-down procedure should have made sure that queue is
@@ -165,6 +149,30 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
}
}
+/* called from workqueue, to workaround syscall using preempt_disable */
+static void cpu_map_kthread_stop(struct work_struct *work)
+{
+struct bpf_cpu_map_entry *rcpu;
+int err;
+rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq);
+/* Wait for flush in __cpu_map_entry_free(), via full RCU barrier,
+ * as it waits until all in-flight call_rcu() callbacks complete.
+ */
+rcu_barrier();
+/* kthread_stop will wake_up_process and wait for it to complete */
+err = kthread_stop(rcpu->kthread);
+if (err) {
+/* kthread_stop may be called before cpu_map_kthread_run
+ * is executed, so we need to release the memory related
+ * to rcpu.
+ */
+put_cpu_map_entry(rcpu);
+}
+}
static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu,
struct list_head *listp,
struct xdp_cpumap_stats *stats)
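The added error handling works because kthread_stop() returns an error (-EINTR) when the thread was created but never woken up; in that window cpu_map_kthread_run() never executed, so the reference it would normally drop on exit is still outstanding and the stopping side has to release it. A condensed sketch of the pattern with hypothetical names:

static void stop_worker(struct my_entry *e)
{
        int err = kthread_stop(e->kthread);

        if (err)                        /* worker never ran (-EINTR) */
                my_entry_put(e);        /* drop the reference it would have dropped itself */
}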


@@ -5642,8 +5642,9 @@ continue_func:
verbose(env, "verifier bug. subprog has tail_call and async cb\n");
return -EFAULT;
}
-/* async callbacks don't increase bpf prog stack size */
-continue;
+/* async callbacks don't increase bpf prog stack size unless called directly */
+if (!bpf_pseudo_call(insn + i))
+continue;
}
i = next_insn;


@@ -63,4 +63,6 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(napi_poll);
EXPORT_TRACEPOINT_SYMBOL_GPL(tcp_send_reset);
EXPORT_TRACEPOINT_SYMBOL_GPL(tcp_bad_csum);
+EXPORT_TRACEPOINT_SYMBOL_GPL(udp_fail_queue_rcv_skb);
EXPORT_TRACEPOINT_SYMBOL_GPL(sk_data_ready);


@@ -4261,6 +4261,11 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
skb_push(skb, -skb_network_offset(skb) + offset);
+/* Ensure the head is writeable before touching the shared info */
+err = skb_unclone(skb, GFP_ATOMIC);
+if (err)
+goto err_linearize;
skb_shinfo(skb)->frag_list = NULL;
while (list_skb) {


@@ -741,7 +741,7 @@ __bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash,
__diag_pop();
BTF_SET8_START(xdp_metadata_kfunc_ids)
-#define XDP_METADATA_KFUNC(_, name) BTF_ID_FLAGS(func, name, 0)
+#define XDP_METADATA_KFUNC(_, name) BTF_ID_FLAGS(func, name, KF_TRUSTED_ARGS)
XDP_METADATA_KFUNC_xxx
#undef XDP_METADATA_KFUNC
BTF_SET8_END(xdp_metadata_kfunc_ids)


@@ -318,9 +318,8 @@ static void addrconf_del_dad_work(struct inet6_ifaddr *ifp)
static void addrconf_mod_rs_timer(struct inet6_dev *idev,
unsigned long when)
{
-if (!timer_pending(&idev->rs_timer))
+if (!mod_timer(&idev->rs_timer, jiffies + when))
in6_dev_hold(idev);
-mod_timer(&idev->rs_timer, jiffies + when);
}
static void addrconf_mod_dad_work(struct inet6_ifaddr *ifp,
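This works because mod_timer() reports whether the timer was already queued: it returns 0 when it arms an inactive timer and 1 when it only re-schedules a pending one, so the device reference is taken exactly on the idle-to-armed transition instead of via the racy timer_pending()-then-mod_timer() sequence the old code used. A generic sketch of the pattern (hypothetical object and helper names):

static void arm_with_ref(struct my_obj *obj, unsigned long delay)
{
        if (!mod_timer(&obj->timer, jiffies + delay))
                my_obj_hold(obj);       /* newly armed: the timer now owns a reference */
        /* otherwise the timer was already pending and already holds one */
}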


@@ -424,7 +424,10 @@ static struct net_device *icmp6_dev(const struct sk_buff *skb)
if (unlikely(dev->ifindex == LOOPBACK_IFINDEX || netif_is_l3_master(skb->dev))) {
const struct rt6_info *rt6 = skb_rt6_info(skb);
-if (rt6)
+/* The destination could be an external IP in Ext Hdr (SRv6, RPL, etc.),
+ * and ip6_null_entry could be set to skb if no route is found.
+ */
+if (rt6 && rt6->rt6i_idev)
dev = rt6->rt6i_idev->dev;
}


@@ -45,6 +45,7 @@
#include <net/tcp_states.h>
#include <net/ip6_checksum.h>
#include <net/ip6_tunnel.h>
+#include <trace/events/udp.h>
#include <net/xfrm.h>
#include <net/inet_hashtables.h>
#include <net/inet6_hashtables.h>
@@ -90,7 +91,7 @@ static u32 udp6_ehashfn(const struct net *net,
fhash = __ipv6_addr_jhash(faddr, udp_ipv6_hash_secret);
return __inet6_ehashfn(lhash, lport, fhash, fport,
-udp_ipv6_hash_secret + net_hash_mix(net));
+udp6_ehash_secret + net_hash_mix(net));
}
int udp_v6_get_port(struct sock *sk, unsigned short snum)
@@ -680,6 +681,7 @@ static int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
}
UDP6_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
kfree_skb_reason(skb, drop_reason);
+trace_udp_fail_queue_rcv_skb(rc, sk);
return -1;
}


@@ -211,24 +211,18 @@ static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple,
unsigned int zoneid,
const struct net *net)
{
-u64 a, b, c, d;
+siphash_key_t key;
get_random_once(&nf_conntrack_hash_rnd, sizeof(nf_conntrack_hash_rnd));
-/* The direction must be ignored, handle usable tuplehash members manually */
-a = (u64)tuple->src.u3.all[0] << 32 | tuple->src.u3.all[3];
-b = (u64)tuple->dst.u3.all[0] << 32 | tuple->dst.u3.all[3];
-c = (__force u64)tuple->src.u.all << 32 | (__force u64)tuple->dst.u.all << 16;
-c |= tuple->dst.protonum;
-d = (u64)zoneid << 32 | net_hash_mix(net);
-/* IPv4: u3.all[1,2,3] == 0 */
-c ^= (u64)tuple->src.u3.all[1] << 32 | tuple->src.u3.all[2];
-d += (u64)tuple->dst.u3.all[1] << 32 | tuple->dst.u3.all[2];
-return (u32)siphash_4u64(a, b, c, d, &nf_conntrack_hash_rnd);
+key = nf_conntrack_hash_rnd;
+key.key[0] ^= zoneid;
+key.key[1] ^= net_hash_mix(net);
+return siphash((void *)tuple,
+offsetofend(struct nf_conntrack_tuple, dst.__nfct_hash_offsetend),
+&key);
}
static u32 scale_hash(u32 hash)


@@ -360,6 +360,9 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
BUG_ON(me->expect_class_max >= NF_CT_MAX_EXPECT_CLASSES);
BUG_ON(strlen(me->name) > NF_CT_HELPER_NAME_LEN - 1);
+if (!nf_ct_helper_hash)
+return -ENOENT;
if (me->expect_policy->max_expected > NF_CT_EXPECT_MAX_CNT)
return -EINVAL;
@@ -515,4 +518,5 @@ int nf_conntrack_helper_init(void)
void nf_conntrack_helper_fini(void)
{
kvfree(nf_ct_helper_hash);
+nf_ct_helper_hash = NULL;
}


@@ -205,6 +205,8 @@ int nf_conntrack_gre_packet(struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
const struct nf_hook_state *state)
{
+unsigned long status;
if (!nf_ct_is_confirmed(ct)) {
unsigned int *timeouts = nf_ct_timeout_lookup(ct);
@@ -217,11 +219,17 @@ int nf_conntrack_gre_packet(struct nf_conn *ct,
ct->proto.gre.timeout = timeouts[GRE_CT_UNREPLIED];
}
+status = READ_ONCE(ct->status);
/* If we've seen traffic both ways, this is a GRE connection.
 * Extend timeout. */
-if (ct->status & IPS_SEEN_REPLY) {
+if (status & IPS_SEEN_REPLY) {
nf_ct_refresh_acct(ct, ctinfo, skb,
ct->proto.gre.stream_timeout);
+/* never set ASSURED for IPS_NAT_CLASH, they time out soon */
+if (unlikely((status & IPS_NAT_CLASH)))
+return NF_ACCEPT;
/* Also, more likely to be important, and not a probe. */
if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
nf_conntrack_event_cache(IPCT_ASSURED, ct);


@@ -253,8 +253,10 @@ int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain)
if (chain->bound)
return -EBUSY;
+if (!nft_use_inc(&chain->use))
+return -EMFILE;
chain->bound = true;
-chain->use++;
nft_chain_trans_bind(ctx, chain);
return 0;
@@ -437,7 +439,7 @@ static int nft_delchain(struct nft_ctx *ctx)
if (IS_ERR(trans))
return PTR_ERR(trans);
-ctx->table->use--;
+nft_use_dec(&ctx->table->use);
nft_deactivate_next(ctx->net, ctx->chain);
return 0;
@@ -476,7 +478,7 @@ nf_tables_delrule_deactivate(struct nft_ctx *ctx, struct nft_rule *rule)
/* You cannot delete the same rule twice */
if (nft_is_active_next(ctx->net, rule)) {
nft_deactivate_next(ctx->net, rule);
-ctx->chain->use--;
+nft_use_dec(&ctx->chain->use);
return 0;
}
return -ENOENT;
@@ -644,7 +646,7 @@ static int nft_delset(const struct nft_ctx *ctx, struct nft_set *set)
nft_map_deactivate(ctx, set);
nft_deactivate_next(ctx->net, set);
-ctx->table->use--;
+nft_use_dec(&ctx->table->use);
return err;
}
@@ -676,7 +678,7 @@ static int nft_delobj(struct nft_ctx *ctx, struct nft_object *obj)
return err;
nft_deactivate_next(ctx->net, obj);
-ctx->table->use--;
+nft_use_dec(&ctx->table->use);
return err;
}
@@ -711,7 +713,7 @@ static int nft_delflowtable(struct nft_ctx *ctx,
return err;
nft_deactivate_next(ctx->net, flowtable);
-ctx->table->use--;
+nft_use_dec(&ctx->table->use);
return err;
}
@@ -2396,9 +2398,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
struct nft_chain *chain;
int err;
-if (table->use == UINT_MAX)
-return -EOVERFLOW;
if (nla[NFTA_CHAIN_HOOK]) {
struct nft_stats __percpu *stats = NULL;
struct nft_chain_hook hook = {};
@@ -2494,6 +2493,11 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
if (err < 0)
goto err_destroy_chain;
+if (!nft_use_inc(&table->use)) {
+err = -EMFILE;
+goto err_use;
+}
trans = nft_trans_chain_add(ctx, NFT_MSG_NEWCHAIN);
if (IS_ERR(trans)) {
err = PTR_ERR(trans);
@@ -2510,10 +2514,11 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
goto err_unregister_hook;
}
-table->use++;
return 0;
err_unregister_hook:
+nft_use_dec_restore(&table->use);
+err_use:
nf_tables_unregister_hook(net, table, chain);
err_destroy_chain:
nf_tables_chain_destroy(ctx);
@@ -2694,7 +2699,7 @@ err_hooks:
static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
const struct nft_table *table,
-const struct nlattr *nla)
+const struct nlattr *nla, u8 genmask)
{
struct nftables_pernet *nft_net = nft_pernet(net);
u32 id = ntohl(nla_get_be32(nla));
@@ -2705,7 +2710,8 @@ static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
if (trans->msg_type == NFT_MSG_NEWCHAIN &&
chain->table == table &&
-id == nft_trans_chain_id(trans))
+id == nft_trans_chain_id(trans) &&
+nft_active_genmask(chain, genmask))
return chain;
}
return ERR_PTR(-ENOENT);
@@ -3809,7 +3815,8 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
return -EOPNOTSUPP;
} else if (nla[NFTA_RULE_CHAIN_ID]) {
-chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID]);
+chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID],
+genmask);
if (IS_ERR(chain)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN_ID]);
return PTR_ERR(chain);
@@ -3840,9 +3847,6 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
return -EINVAL;
handle = nf_tables_alloc_handle(table);
-if (chain->use == UINT_MAX)
-return -EOVERFLOW;
if (nla[NFTA_RULE_POSITION]) {
pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION]));
old_rule = __nft_rule_lookup(chain, pos_handle);
@@ -3936,6 +3940,11 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
}
}
+if (!nft_use_inc(&chain->use)) {
+err = -EMFILE;
+goto err_release_rule;
+}
if (info->nlh->nlmsg_flags & NLM_F_REPLACE) {
err = nft_delrule(&ctx, old_rule);
if (err < 0)
@@ -3967,7 +3976,6 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
}
}
kvfree(expr_info);
-chain->use++;
if (flow)
nft_trans_flow_rule(trans) = flow;
@@ -3978,6 +3986,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
return 0;
err_destroy_flow_rule:
+nft_use_dec_restore(&chain->use);
if (flow)
nft_flow_rule_destroy(flow);
err_release_rule:
@@ -5014,9 +5023,15 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
alloc_size = sizeof(*set) + size + udlen;
if (alloc_size < size || alloc_size > INT_MAX)
return -ENOMEM;
+if (!nft_use_inc(&table->use))
+return -EMFILE;
set = kvzalloc(alloc_size, GFP_KERNEL_ACCOUNT);
-if (!set)
-return -ENOMEM;
+if (!set) {
+err = -ENOMEM;
+goto err_alloc;
+}
name = nla_strdup(nla[NFTA_SET_NAME], GFP_KERNEL_ACCOUNT);
if (!name) {
@@ -5074,7 +5089,7 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
goto err_set_expr_alloc;
list_add_tail_rcu(&set->list, &table->sets);
-table->use++;
return 0;
err_set_expr_alloc:
@@ -5086,6 +5101,9 @@ err_set_init:
kfree(set->name);
err_set_name:
kvfree(set);
+err_alloc:
+nft_use_dec_restore(&table->use);
return err;
}
@@ -5224,9 +5242,6 @@ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_set_binding *i;
struct nft_set_iter iter;
-if (set->use == UINT_MAX)
-return -EOVERFLOW;
if (!list_empty(&set->bindings) && nft_set_is_anonymous(set))
return -EBUSY;
@@ -5254,10 +5269,12 @@ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
return iter.err;
}
bind:
+if (!nft_use_inc(&set->use))
+return -EMFILE;
binding->chain = ctx->chain;
list_add_tail_rcu(&binding->list, &set->bindings);
nft_set_trans_bind(ctx, set);
-set->use++;
return 0;
}
@@ -5331,7 +5348,7 @@ void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
nft_clear(ctx->net, set);
}
-set->use++;
+nft_use_inc_restore(&set->use);
}
EXPORT_SYMBOL_GPL(nf_tables_activate_set);
@@ -5347,7 +5364,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
else
list_del_rcu(&binding->list);
-set->use--;
+nft_use_dec(&set->use);
break;
case NFT_TRANS_PREPARE:
if (nft_set_is_anonymous(set)) {
@@ -5356,7 +5373,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
nft_deactivate_next(ctx->net, set);
}
-set->use--;
+nft_use_dec(&set->use);
return;
case NFT_TRANS_ABORT:
case NFT_TRANS_RELEASE:
@@ -5364,7 +5381,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
nft_map_deactivate(ctx, set);
-set->use--;
+nft_use_dec(&set->use);
fallthrough;
default:
nf_tables_unbind_set(ctx, set, binding,
@@ -6155,7 +6172,7 @@ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
nft_set_elem_expr_destroy(&ctx, nft_set_ext_expr(ext));
if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
-(*nft_set_ext_obj(ext))->use--;
+nft_use_dec(&(*nft_set_ext_obj(ext))->use);
kfree(elem);
}
EXPORT_SYMBOL_GPL(nft_set_elem_destroy);
@@ -6657,8 +6674,16 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
set->objtype, genmask);
if (IS_ERR(obj)) {
err = PTR_ERR(obj);
+obj = NULL;
goto err_parse_key_end;
}
+if (!nft_use_inc(&obj->use)) {
+err = -EMFILE;
+obj = NULL;
+goto err_parse_key_end;
+}
err = nft_set_ext_add(&tmpl, NFT_SET_EXT_OBJREF);
if (err < 0)
goto err_parse_key_end;
@@ -6727,10 +6752,9 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
if (flags)
*nft_set_ext_flags(ext) = flags;
-if (obj) {
+if (obj)
*nft_set_ext_obj(ext) = obj;
-obj->use++;
-}
if (ulen > 0) {
if (nft_set_ext_check(&tmpl, NFT_SET_EXT_USERDATA, ulen) < 0) {
err = -EINVAL;
@@ -6798,12 +6822,13 @@ err_element_clash:
kfree(trans);
err_elem_free:
nf_tables_set_elem_destroy(ctx, set, elem.priv);
-if (obj)
-obj->use--;
err_parse_data:
if (nla[NFTA_SET_ELEM_DATA] != NULL)
nft_data_release(&elem.data.val, desc.type);
err_parse_key_end:
+if (obj)
+nft_use_dec_restore(&obj->use);
nft_data_release(&elem.key_end.val, NFT_DATA_VALUE);
err_parse_key:
nft_data_release(&elem.key.val, NFT_DATA_VALUE);
@@ -6883,7 +6908,7 @@ void nft_data_hold(const struct nft_data *data, enum nft_data_types type)
case NFT_JUMP:
case NFT_GOTO:
chain = data->verdict.chain;
-chain->use++;
+nft_use_inc_restore(&chain->use);
break;
}
}
@@ -6898,7 +6923,7 @@ static void nft_setelem_data_activate(const struct net *net,
if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))
nft_data_hold(nft_set_ext_data(ext), set->dtype);
if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
-(*nft_set_ext_obj(ext))->use++;
+nft_use_inc_restore(&(*nft_set_ext_obj(ext))->use);
}
static void nft_setelem_data_deactivate(const struct net *net,
@@ -6910,7 +6935,7 @@ static void nft_setelem_data_deactivate(const struct net *net,
if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))
nft_data_release(nft_set_ext_data(ext), set->dtype);
if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
-(*nft_set_ext_obj(ext))->use--;
+nft_use_dec(&(*nft_set_ext_obj(ext))->use);
}
static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
@@ -7453,9 +7478,14 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+if (!nft_use_inc(&table->use))
+return -EMFILE;
type = nft_obj_type_get(net, objtype);
-if (IS_ERR(type))
-return PTR_ERR(type);
+if (IS_ERR(type)) {
+err = PTR_ERR(type);
+goto err_type;
+}
obj = nft_obj_init(&ctx, type, nla[NFTA_OBJ_DATA]);
if (IS_ERR(obj)) {
@@ -7489,7 +7519,7 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
goto err_obj_ht;
list_add_tail_rcu(&obj->list, &table->objects);
-table->use++;
return 0;
err_obj_ht:
/* queued in transaction log */
@@ -7505,6 +7535,9 @@ err_strdup:
kfree(obj);
err_init:
module_put(type->owner);
+err_type:
+nft_use_dec_restore(&table->use);
return err;
}
@@ -7906,7 +7939,7 @@ void nf_tables_deactivate_flowtable(const struct nft_ctx *ctx,
case NFT_TRANS_PREPARE:
case NFT_TRANS_ABORT:
case NFT_TRANS_RELEASE:
-flowtable->use--;
+nft_use_dec(&flowtable->use);
fallthrough;
default:
return;
@@ -8260,9 +8293,14 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+if (!nft_use_inc(&table->use))
+return -EMFILE;
flowtable = kzalloc(sizeof(*flowtable), GFP_KERNEL_ACCOUNT);
-if (!flowtable)
-return -ENOMEM;
+if (!flowtable) {
+err = -ENOMEM;
+goto flowtable_alloc;
+}
flowtable->table = table;
flowtable->handle = nf_tables_alloc_handle(table);
@@ -8317,7 +8355,6 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
goto err5;
list_add_tail_rcu(&flowtable->list, &table->flowtables);
-table->use++;
return 0;
err5:
@@ -8334,6 +8371,9 @@ err2:
kfree(flowtable->name);
err1:
kfree(flowtable);
+flowtable_alloc:
+nft_use_dec_restore(&table->use);
return err;
}
@@ -9713,7 +9753,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
*/
if (nft_set_is_anonymous(nft_trans_set(trans)) &&
!list_empty(&nft_trans_set(trans)->bindings))
-trans->ctx.table->use--;
+nft_use_dec(&trans->ctx.table->use);
}
nf_tables_set_notify(&trans->ctx, nft_trans_set(trans),
NFT_MSG_NEWSET, GFP_KERNEL);
@@ -9943,7 +9983,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_trans_destroy(trans);
break;
}
-trans->ctx.table->use--;
+nft_use_dec_restore(&trans->ctx.table->use);
nft_chain_del(trans->ctx.chain);
nf_tables_unregister_hook(trans->ctx.net,
trans->ctx.table,
@@ -9956,7 +9996,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
list_splice(&nft_trans_chain_hooks(trans),
&nft_trans_basechain(trans)->hook_list);
} else {
-trans->ctx.table->use++;
+nft_use_inc_restore(&trans->ctx.table->use);
nft_clear(trans->ctx.net, trans->ctx.chain);
}
nft_trans_destroy(trans);
@@ -9966,7 +10006,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_trans_destroy(trans);
break;
}
-trans->ctx.chain->use--;
+nft_use_dec_restore(&trans->ctx.chain->use);
list_del_rcu(&nft_trans_rule(trans)->list);
nft_rule_expr_deactivate(&trans->ctx,
nft_trans_rule(trans),
@@ -9976,7 +10016,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
break;
case NFT_MSG_DELRULE:
case NFT_MSG_DESTROYRULE:
-trans->ctx.chain->use++;
+nft_use_inc_restore(&trans->ctx.chain->use);
nft_clear(trans->ctx.net, nft_trans_rule(trans));
nft_rule_expr_activate(&trans->ctx, nft_trans_rule(trans));
if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
@@ -9989,7 +10029,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_trans_destroy(trans);
break;
}
-trans->ctx.table->use--;
+nft_use_dec_restore(&trans->ctx.table->use);
if (nft_trans_set_bound(trans)) {
nft_trans_destroy(trans);
break;
@@ -9998,7 +10038,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
break;
case NFT_MSG_DELSET:
case NFT_MSG_DESTROYSET:
-trans->ctx.table->use++;
+nft_use_inc_restore(&trans->ctx.table->use);
nft_clear(trans->ctx.net, nft_trans_set(trans));
if (nft_trans_set(trans)->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
nft_map_activate(&trans->ctx, nft_trans_set(trans));
@@ -10042,13 +10082,13 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_obj_destroy(&trans->ctx, nft_trans_obj_newobj(trans));
nft_trans_destroy(trans);
} else {
-trans->ctx.table->use--;
+nft_use_dec_restore(&trans->ctx.table->use);
nft_obj_del(nft_trans_obj(trans));
}
break;
case NFT_MSG_DELOBJ:
case NFT_MSG_DESTROYOBJ:
-trans->ctx.table->use++;
+nft_use_inc_restore(&trans->ctx.table->use);
nft_clear(trans->ctx.net, nft_trans_obj(trans));
nft_trans_destroy(trans);
break;
@@ -10057,7 +10097,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_unregister_flowtable_net_hooks(net,
&nft_trans_flowtable_hooks(trans));
} else {
-trans->ctx.table->use--;
+nft_use_dec_restore(&trans->ctx.table->use);
list_del_rcu(&nft_trans_flowtable(trans)->list);
nft_unregister_flowtable_net_hooks(net,
&nft_trans_flowtable(trans)->hook_list);
@@ -10069,7 +10109,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
list_splice(&nft_trans_flowtable_hooks(trans),
&nft_trans_flowtable(trans)->hook_list);
} else {
-trans->ctx.table->use++;
+nft_use_inc_restore(&trans->ctx.table->use);
nft_clear(trans->ctx.net, nft_trans_flowtable(trans));
}
nft_trans_destroy(trans);
@@ -10502,7 +10542,8 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
genmask);
} else if (tb[NFTA_VERDICT_CHAIN_ID]) {
chain = nft_chain_lookup_byid(ctx->net, ctx->table,
-tb[NFTA_VERDICT_CHAIN_ID]);
+tb[NFTA_VERDICT_CHAIN_ID],
+genmask);
if (IS_ERR(chain))
return PTR_ERR(chain);
} else {
@@ -10518,8 +10559,9 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
if (desc->flags & NFT_DATA_DESC_SETELEM &&
chain->flags & NFT_CHAIN_BINDING)
return -EINVAL;
+if (!nft_use_inc(&chain->use))
+return -EMFILE;
-chain->use++;
data->verdict.chain = chain;
break;
}
@@ -10537,7 +10579,7 @@ static void nft_verdict_uninit(const struct nft_data *data)
case NFT_JUMP:
case NFT_GOTO:
chain = data->verdict.chain;
-chain->use--;
+nft_use_dec(&chain->use);
break;
}
}
@@ -10706,11 +10748,11 @@ int __nft_release_basechain(struct nft_ctx *ctx)
nf_tables_unregister_hook(ctx->net, ctx->chain->table, ctx->chain);
list_for_each_entry_safe(rule, nr, &ctx->chain->rules, list) {
list_del(&rule->list);
-ctx->chain->use--;
+nft_use_dec(&ctx->chain->use);
nf_tables_rule_release(ctx, rule);
}
nft_chain_del(ctx->chain);
-ctx->table->use--;
+nft_use_dec(&ctx->table->use);
nf_tables_chain_destroy(ctx);
return 0;
@@ -10760,18 +10802,18 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
ctx.chain = chain;
list_for_each_entry_safe(rule, nr, &chain->rules, list) {
list_del(&rule->list);
-chain->use--;
+nft_use_dec(&chain->use);
nf_tables_rule_release(&ctx, rule);
}
}
list_for_each_entry_safe(flowtable, nf, &table->flowtables, list) {
list_del(&flowtable->list);
-table->use--;
+nft_use_dec(&table->use);
nf_tables_flowtable_destroy(flowtable);
}
list_for_each_entry_safe(set, ns, &table->sets, list) {
list_del(&set->list);
-table->use--;
+nft_use_dec(&table->use);
if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
nft_map_deactivate(&ctx, set);
@@ -10779,13 +10821,13 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
}
list_for_each_entry_safe(obj, ne, &table->objects, list) {
nft_obj_del(obj);
-table->use--;
+nft_use_dec(&table->use);
nft_obj_destroy(&ctx, obj);
}
list_for_each_entry_safe(chain, nc, &table->chains, list) {
ctx.chain = chain;
nft_chain_del(chain);
-table->use--;
+nft_use_dec(&table->use);
nf_tables_chain_destroy(&ctx);
}
nf_tables_table_destroy(&ctx);


@@ -30,11 +30,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
const struct nft_byteorder *priv = nft_expr_priv(expr);
u32 *src = &regs->data[priv->sreg];
u32 *dst = &regs->data[priv->dreg];
-union { u32 u32; u16 u16; } *s, *d;
+u16 *s16, *d16;
unsigned int i;
-s = (void *)src;
-d = (void *)dst;
+s16 = (void *)src;
+d16 = (void *)dst;
switch (priv->size) {
case 8: {
@@ -62,11 +62,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
switch (priv->op) {
case NFT_BYTEORDER_NTOH:
for (i = 0; i < priv->len / 4; i++)
-d[i].u32 = ntohl((__force __be32)s[i].u32);
+dst[i] = ntohl((__force __be32)src[i]);
break;
case NFT_BYTEORDER_HTON:
for (i = 0; i < priv->len / 4; i++)
-d[i].u32 = (__force __u32)htonl(s[i].u32);
+dst[i] = (__force __u32)htonl(src[i]);
break;
}
break;
@@ -74,11 +74,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
switch (priv->op) {
case NFT_BYTEORDER_NTOH:
for (i = 0; i < priv->len / 2; i++)
-d[i].u16 = ntohs((__force __be16)s[i].u16);
+d16[i] = ntohs((__force __be16)s16[i]);
break;
case NFT_BYTEORDER_HTON:
for (i = 0; i < priv->len / 2; i++)
-d[i].u16 = (__force __u16)htons(s[i].u16);
+d16[i] = (__force __u16)htons(s16[i]);
break;
}
break;
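What the nft_byteorder hunks fix is a stride bug: indexing an array of a 4-byte union by the number of 16-bit elements advances 4 bytes per step, so the second half of the accesses lands beyond the register area. A standalone illustration of the bad pattern (not kernel code):

union reg {
        unsigned int   u32v;
        unsigned short u16v;
};                                      /* sizeof(union reg) == 4 */

static void swab16_buggy(union reg *d, const union reg *s, unsigned int len)
{
        unsigned int i;

        /* len is in bytes, so len / 2 iterations; but d[i] and s[i] step
         * by sizeof(union reg) == 4 bytes, roughly doubling the range
         * touched for 16-bit data.
         */
        for (i = 0; i < len / 2; i++)
                d[i].u16v = (unsigned short)((s[i].u16v >> 8) | (s[i].u16v << 8));
}

Switching to plain u16 *s16, *d16 pointers (and u32 *src, *dst for the 4-byte case) makes the index stride match the element size.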


@@ -408,8 +408,10 @@ static int nft_flow_offload_init(const struct nft_ctx *ctx,
if (IS_ERR(flowtable))
return PTR_ERR(flowtable);
+if (!nft_use_inc(&flowtable->use))
+return -EMFILE;
priv->flowtable = flowtable;
-flowtable->use++;
return nf_ct_netns_get(ctx->net, ctx->family);
}
@@ -428,7 +430,7 @@ static void nft_flow_offload_activate(const struct nft_ctx *ctx,
{
struct nft_flow_offload *priv = nft_expr_priv(expr);
-priv->flowtable->use++;
+nft_use_inc_restore(&priv->flowtable->use);
}
static void nft_flow_offload_destroy(const struct nft_ctx *ctx,


@@ -159,7 +159,7 @@ static void nft_immediate_deactivate(const struct nft_ctx *ctx,
default:
nft_chain_del(chain);
chain->bound = false;
-chain->table->use--;
+nft_use_dec(&chain->table->use);
break;
}
break;
@@ -198,7 +198,7 @@ static void nft_immediate_destroy(const struct nft_ctx *ctx,
 * let the transaction records release this chain and its rules.
 */
if (chain->bound) {
-chain->use--;
+nft_use_dec(&chain->use);
break;
}
@@ -206,9 +206,9 @@ static void nft_immediate_destroy(const struct nft_ctx *ctx,
chain_ctx = *ctx;
chain_ctx.chain = chain;
-chain->use--;
+nft_use_dec(&chain->use);
list_for_each_entry_safe(rule, n, &chain->rules, list) {
-chain->use--;
+nft_use_dec(&chain->use);
list_del(&rule->list);
nf_tables_rule_destroy(&chain_ctx, rule);
}


@@ -41,8 +41,10 @@ static int nft_objref_init(const struct nft_ctx *ctx,
if (IS_ERR(obj))
return -ENOENT;
+if (!nft_use_inc(&obj->use))
+return -EMFILE;
nft_objref_priv(expr) = obj;
-obj->use++;
return 0;
}
@@ -72,7 +74,7 @@ static void nft_objref_deactivate(const struct nft_ctx *ctx,
if (phase == NFT_TRANS_COMMIT)
return;
-obj->use--;
+nft_use_dec(&obj->use);
}
static void nft_objref_activate(const struct nft_ctx *ctx,
@@ -80,7 +82,7 @@ static void nft_objref_activate(const struct nft_ctx *ctx,
{
struct nft_object *obj = nft_objref_priv(expr);
-obj->use++;
+nft_use_inc_restore(&obj->use);
}
static const struct nft_expr_ops nft_objref_ops = {


@@ -1320,7 +1320,7 @@ struct tc_action_ops *tc_action_load_ops(struct nlattr *nla, bool police,
return ERR_PTR(err);
}
} else {
-if (strlcpy(act_name, "police", IFNAMSIZ) >= IFNAMSIZ) {
+if (strscpy(act_name, "police", IFNAMSIZ) < 0) {
NL_SET_ERR_MSG(extack, "TC action name too long");
return ERR_PTR(-EINVAL);
}
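The flipped comparison follows from how the two helpers report truncation: strlcpy() returns strlen(src), so truncation shows up as a return value >= the buffer size (and the source is always walked to its NUL terminator), whereas strscpy() returns the number of characters copied or -E2BIG when the string did not fit. A minimal sketch of the new-style check (hypothetical buffer and name variables):

char buf[IFNAMSIZ];

if (strscpy(buf, name, sizeof(buf)) < 0)
        return -EINVAL;                 /* name did not fit in IFNAMSIZ bytes */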


@@ -812,6 +812,16 @@ static int fl_set_key_port_range(struct nlattr **tb, struct fl_flow_key *key,
TCA_FLOWER_KEY_PORT_SRC_MAX, &mask->tp_range.tp_max.src,
TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src));
+if (mask->tp_range.tp_min.dst != mask->tp_range.tp_max.dst) {
+NL_SET_ERR_MSG(extack,
+"Both min and max destination ports must be specified");
+return -EINVAL;
+}
+if (mask->tp_range.tp_min.src != mask->tp_range.tp_max.src) {
+NL_SET_ERR_MSG(extack,
+"Both min and max source ports must be specified");
+return -EINVAL;
+}
if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst &&
ntohs(key->tp_range.tp_max.dst) <=
ntohs(key->tp_range.tp_min.dst)) {


@@ -212,11 +212,6 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
if (err < 0)
return err;
-if (tb[TCA_FW_CLASSID]) {
-f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
-tcf_bind_filter(tp, &f->res, base);
-}
if (tb[TCA_FW_INDEV]) {
int ret;
ret = tcf_change_indev(net, tb[TCA_FW_INDEV], extack);
@@ -233,6 +228,11 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
} else if (head->mask != 0xFFFFFFFF)
return err;
+if (tb[TCA_FW_CLASSID]) {
+f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
+tcf_bind_filter(tp, &f->res, base);
+}
return 0;
}


@@ -381,8 +381,13 @@ static int qfq_change_agg(struct Qdisc *sch, struct qfq_class *cl, u32 weight,
u32 lmax)
{
struct qfq_sched *q = qdisc_priv(sch);
-struct qfq_aggregate *new_agg = qfq_find_agg(q, lmax, weight);
+struct qfq_aggregate *new_agg;
+/* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */
+if (lmax > QFQ_MAX_LMAX)
+return -EINVAL;
+new_agg = qfq_find_agg(q, lmax, weight);
if (new_agg == NULL) { /* create new aggregate */
new_agg = kzalloc(sizeof(*new_agg), GFP_ATOMIC);
if (new_agg == NULL)
@@ -423,10 +428,17 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
else
weight = 1;
-if (tb[TCA_QFQ_LMAX])
+if (tb[TCA_QFQ_LMAX]) {
lmax = nla_get_u32(tb[TCA_QFQ_LMAX]);
-else
+} else {
+/* MTU size is user controlled */
lmax = psched_mtu(qdisc_dev(sch));
+if (lmax < QFQ_MIN_LMAX || lmax > QFQ_MAX_LMAX) {
+NL_SET_ERR_MSG_MOD(extack,
+"MTU size out of bounds for qfq");
+return -EINVAL;
+}
+}
inv_w = ONE_FP / weight;
weight = ONE_FP / inv_w;


@@ -580,6 +580,8 @@ int ieee80211_strip_8023_mesh_hdr(struct sk_buff *skb)
hdrlen += ETH_ALEN + 2;
else if (!pskb_may_pull(skb, hdrlen))
return -EINVAL;
+else
+payload.eth.h_proto = htons(skb->len - hdrlen);
mesh_addr = skb->data + sizeof(payload.eth) + ETH_ALEN;
switch (payload.flags & MESH_FLAGS_AE) {


@@ -0,0 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
#include "async_stack_depth.skel.h"
void test_async_stack_depth(void)
{
RUN_TESTS(async_stack_depth);
}


@@ -0,0 +1,40 @@
// SPDX-License-Identifier: GPL-2.0
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
struct hmap_elem {
struct bpf_timer timer;
};
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(max_entries, 64);
__type(key, int);
__type(value, struct hmap_elem);
} hmap SEC(".maps");
__attribute__((noinline))
static int timer_cb(void *map, int *key, struct bpf_timer *timer)
{
volatile char buf[256] = {};
return buf[69];
}
SEC("tc")
__failure __msg("combined stack size of 2 calls")
int prog(struct __sk_buff *ctx)
{
struct hmap_elem *elem;
volatile char buf[256] = {};
elem = bpf_map_lookup_elem(&hmap, &(int){0});
if (!elem)
return 0;
timer_cb(NULL, NULL, NULL);
return bpf_timer_set_callback(&elem->timer, timer_cb) + buf[0];
}
char _license[] SEC("license") = "GPL";


@@ -213,5 +213,91 @@
"$TC qdisc del dev $DUMMY handle 1: root",
"$IP link del dev $DUMMY type dummy"
]
},
{
"id": "85ee",
"name": "QFQ with big MTU",
"category": [
"qdisc",
"qfq"
],
"plugins": {
"requires": "nsPlugin"
},
"setup": [
"$IP link add dev $DUMMY type dummy || /bin/true",
"$IP link set dev $DUMMY mtu 2147483647 || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
"expExitCode": "2",
"verifyCmd": "$TC class show dev $DUMMY",
"matchPattern": "class qfq 1:",
"matchCount": "0",
"teardown": [
"$IP link del dev $DUMMY type dummy"
]
},
{
"id": "ddfa",
"name": "QFQ with small MTU",
"category": [
"qdisc",
"qfq"
],
"plugins": {
"requires": "nsPlugin"
},
"setup": [
"$IP link add dev $DUMMY type dummy || /bin/true",
"$IP link set dev $DUMMY mtu 256 || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
"expExitCode": "2",
"verifyCmd": "$TC class show dev $DUMMY",
"matchPattern": "class qfq 1:",
"matchCount": "0",
"teardown": [
"$IP link del dev $DUMMY type dummy"
]
},
{
"id": "5993",
"name": "QFQ with stab overhead greater than max packet len",
"category": [
"qdisc",
"qfq",
"scapy"
],
"plugins": {
"requires": [
"nsPlugin",
"scapyPlugin"
]
},
"setup": [
"$IP link add dev $DUMMY type dummy || /bin/true",
"$IP link set dev $DUMMY up || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: stab mtu 2048 tsize 512 mpu 0 overhead 999999999 linklayer ethernet root qfq",
"$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
"$TC qdisc add dev $DEV1 clsact",
"$TC filter add dev $DEV1 ingress protocol ip flower dst_ip 1.3.3.7/32 action mirred egress mirror dev $DUMMY"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent 1: matchall classid 1:1",
"scapy": [
{
"iface": "$DEV0",
"count": 22,
"packet": "Ether(type=0x800)/IP(src='10.0.0.10',dst='1.3.3.7')/TCP(sport=5000,dport=10)"
}
],
"expExitCode": "0",
"verifyCmd": "$TC -s qdisc ls dev $DUMMY",
"matchPattern": "dropped 22",
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1: root qfq"
]
}
]