Networking fixes for 6.5-rc2, including fixes from netfilter, wireless and ebpf
 
 Current release - regressions:
 
   - netfilter: conntrack: gre: don't set assured flag for clash entries
 
   - wifi: iwlwifi: remove 'use_tfh' config to fix crash
 
 Previous releases - regressions:
 
   - ipv6: fix a potential refcount underflow for idev
 
   - icmp6: fix null-ptr-deref of ip6_null_entry->rt6i_idev in icmp6_dev()
 
   - bpf: fix max stack depth check for async callbacks
 
   - eth: mlx5e:
     - check for NOT_READY flag state after locking
     - fix page_pool page fragment tracking for XDP
 
   - eth: igc:
     - fix tx hang issue when QBV gate is closed
     - fix corner cases for TSN offload
 
   - eth: octeontx2-af: Move validation of ptp pointer before its usage
 
   - eth: ena: fix shift-out-of-bounds in exponential backoff
 
 Previous releases - always broken:
 
   - core: prevent skb corruption on frag list segmentation
 
   - sched:
     - cls_fw: fix improper refcount update leads to use-after-free
     - sch_qfq: account for stab overhead in qfq_enqueue
 
   - netfilter:
     - report use refcount overflow
     - prevent OOB access in nft_byteorder_eval
 
   - wifi: mt7921e: fix init command fail with enabled device
 
   - eth: ocelot: fix oversize frame dropping for preemptible TCs
 
   - eth: fec: recycle pages for transmitted XDP frames
 
 Signed-off-by: Paolo Abeni <pabeni@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEEg1AjqC77wbdLX2LbKSR5jcyPE6QFAmSv1YISHHBhYmVuaUBy
 ZWRoYXQuY29tAAoJECkkeY3MjxOkpQgP/1msj0MlIWJnMgzPiMonDSe746JGTg/j
 YengEjqcy3ozC4COBEeyBO6ilt6I+Wrb5H5jimn9h2djB+D7htWNaejQaqJrBxph
 F4lUC6OJqd2ncI3tXAG2BSX1duzDr6B7yL7d5InFIczw8vNh+chsyX0sjlzU12bt
 ppjcSb+Ffc796DB0ItJkBqluxcpjyXE15ZWTTV4GEHK6RoRdxNIGjd7NgvD8podB
 Q/464bHs1jJYkAavuobiOXV2fuxWLTs77E0Vloizoo+42UiRFMLJk+RX98PhSIMa
 eejkxfm+H6+6Qi2omYepvf7vDN3GtLjxbr5C3mTdWPuL4QbNY8agVJ7sS4XnL5/v
 B7EAjyGQK9SmD36zTu7QL/Ul6fSnRq8jz20B0mDa0imAWzi58A+jqbQAMoVOMSS+
 Uv4yKJpIUyx7mUI77+EX3U9r1wytw5eniatTDU+GAsQb2CJ43CqDmn/7RcmGacBo
 P1q+il9JW4kzUQrisUSxmQDfpBvQi5wiygiEdUNI5FEhq6/iKe/lrJnmJZpaLkd5
 P3oEKjapamAmcyrEr/7VD1Mb4jrRfpB7zVn/5OyvywbcLQxA+531iPpy4r4W6cWH
 1MRLBVVHKyb3jfm8J3T4lpDEzd03+MiPS8JiKMUYYNUYkY8tYp92muwC7z2sGI4M
 6eR2MeKD4vds
 =cELX
 -----END PGP SIGNATURE-----

Merge tag 'net-6.5-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from netfilter, wireless and ebpf.

  Current release - regressions:

   - netfilter: conntrack: gre: don't set assured flag for clash entries

   - wifi: iwlwifi: remove 'use_tfh' config to fix crash

  Previous releases - regressions:

   - ipv6: fix a potential refcount underflow for idev

   - icmp6: fix null-ptr-deref of ip6_null_entry->rt6i_idev in
     icmp6_dev()

   - bpf: fix max stack depth check for async callbacks

   - eth: mlx5e:
      - check for NOT_READY flag state after locking
      - fix page_pool page fragment tracking for XDP

   - eth: igc:
      - fix tx hang issue when QBV gate is closed
      - fix corner cases for TSN offload

   - eth: octeontx2-af: Move validation of ptp pointer before its usage

   - eth: ena: fix shift-out-of-bounds in exponential backoff

  Previous releases - always broken:

   - core: prevent skb corruption on frag list segmentation

   - sched:
      - cls_fw: fix improper refcount update leads to use-after-free
      - sch_qfq: account for stab overhead in qfq_enqueue

   - netfilter:
      - report use refcount overflow
      - prevent OOB access in nft_byteorder_eval

   - wifi: mt7921e: fix init command fail with enabled device

   - eth: ocelot: fix oversize frame dropping for preemptible TCs

   - eth: fec: recycle pages for transmitted XDP frames"

* tag 'net-6.5-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (79 commits)
  selftests: tc-testing: add test for qfq with stab overhead
  net/sched: sch_qfq: account for stab overhead in qfq_enqueue
  selftests: tc-testing: add tests for qfq mtu sanity check
  net/sched: sch_qfq: reintroduce lmax bound check for MTU
  wifi: cfg80211: fix receiving mesh packets without RFC1042 header
  wifi: rtw89: debug: fix error code in rtw89_debug_priv_send_h2c_set()
  net: txgbe: fix eeprom calculation error
  net/sched: make psched_mtu() RTNL-less safe
  net: ena: fix shift-out-of-bounds in exponential backoff
  netdevsim: fix uninitialized data in nsim_dev_trap_fa_cookie_write()
  net/sched: flower: Ensure both minimum and maximum ports are specified
  MAINTAINERS: Add another mailing list for QUALCOMM ETHQOS ETHERNET DRIVER
  docs: netdev: update the URL of the status page
  wifi: iwlwifi: remove 'use_tfh' config to fix crash
  xdp: use trusted arguments in XDP hints kfuncs
  bpf: cpumap: Fix memory leak in cpu_map_update_elem
  wifi: airo: avoid uninitialized warning in airo_get_rate()
  octeontx2-pf: Add additional check for MCAM rules
  net: dsa: Removed unneeded of_node_put in felix_parse_ports_node
  net: fec: use netdev_err_once() instead of netdev_err()
  ...
Linus Torvalds 2023-07-13 14:21:22 -07:00
commit b1983d427a
91 changed files with 1018 additions and 528 deletions


@ -98,7 +98,7 @@ If you aren't subscribed to netdev and/or are simply unsure if
repository link above for any new networking-related commits. You may
also check the following website for the current status:
http://vger.kernel.org/~davem/net-next.html
https://patchwork.hopto.org/net-next.html
The ``net`` tree continues to collect fixes for the vX.Y content, and is
fed back to Linus at regular (~weekly) intervals. Meaning that the


@ -17543,6 +17543,7 @@ QUALCOMM ETHQOS ETHERNET DRIVER
M: Vinod Koul <vkoul@kernel.org>
R: Bhupesh Sharma <bhupesh.sharma@linaro.org>
L: netdev@vger.kernel.org
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/net/qcom,ethqos.yaml
F: drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c


@ -69,7 +69,7 @@ struct rv_jit_context {
struct bpf_prog *prog;
u16 *insns; /* RV insns */
int ninsns;
int body_len;
int prologue_len;
int epilogue_offset;
int *offset; /* BPF to RV */
int nexentries;
@ -216,8 +216,8 @@ static inline int rv_offset(int insn, int off, struct rv_jit_context *ctx)
int from, to;
off++; /* BPF branch is from PC+1, RV is from PC */
from = (insn > 0) ? ctx->offset[insn - 1] : 0;
to = (insn + off > 0) ? ctx->offset[insn + off - 1] : 0;
from = (insn > 0) ? ctx->offset[insn - 1] : ctx->prologue_len;
to = (insn + off > 0) ? ctx->offset[insn + off - 1] : ctx->prologue_len;
return ninsns_rvoff(to - from);
}


@ -44,7 +44,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
unsigned int prog_size = 0, extable_size = 0;
bool tmp_blinded = false, extra_pass = false;
struct bpf_prog *tmp, *orig_prog = prog;
int pass = 0, prev_ninsns = 0, prologue_len, i;
int pass = 0, prev_ninsns = 0, i;
struct rv_jit_data *jit_data;
struct rv_jit_context *ctx;
@ -83,6 +83,12 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
prog = orig_prog;
goto out_offset;
}
if (build_body(ctx, extra_pass, NULL)) {
prog = orig_prog;
goto out_offset;
}
for (i = 0; i < prog->len; i++) {
prev_ninsns += 32;
ctx->offset[i] = prev_ninsns;
@ -91,12 +97,15 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
for (i = 0; i < NR_JIT_ITERATIONS; i++) {
pass++;
ctx->ninsns = 0;
bpf_jit_build_prologue(ctx);
ctx->prologue_len = ctx->ninsns;
if (build_body(ctx, extra_pass, ctx->offset)) {
prog = orig_prog;
goto out_offset;
}
ctx->body_len = ctx->ninsns;
bpf_jit_build_prologue(ctx);
ctx->epilogue_offset = ctx->ninsns;
bpf_jit_build_epilogue(ctx);
@ -162,10 +171,8 @@ skip_init_ctx:
if (!prog->is_func || extra_pass) {
bpf_jit_binary_lock_ro(jit_data->header);
prologue_len = ctx->epilogue_offset - ctx->body_len;
for (i = 0; i < prog->len; i++)
ctx->offset[i] = ninsns_rvoff(prologue_len +
ctx->offset[i]);
ctx->offset[i] = ninsns_rvoff(ctx->offset[i]);
bpf_prog_fill_jited_linfo(prog, ctx->offset);
out_offset:
kfree(ctx->offset);


@ -1286,7 +1286,6 @@ static int felix_parse_ports_node(struct felix *felix,
if (err < 0) {
dev_info(dev, "Unsupported PHY mode %s on port %d\n",
phy_modes(phy_mode), port);
of_node_put(child);
/* Leave port_phy_modes[port] = 0, which is also
* PHY_INTERFACE_MODE_NA. This will perform a
@ -1786,16 +1785,15 @@ static int felix_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
{
struct ocelot *ocelot = ds->priv;
struct ocelot_port *ocelot_port = ocelot->ports[port];
struct felix *felix = ocelot_to_felix(ocelot);
ocelot_port_set_maxlen(ocelot, port, new_mtu);
mutex_lock(&ocelot->tas_lock);
mutex_lock(&ocelot->fwd_domain_lock);
if (ocelot_port->taprio && felix->info->tas_guard_bands_update)
felix->info->tas_guard_bands_update(ocelot, port);
if (ocelot_port->taprio && ocelot->ops->tas_guard_bands_update)
ocelot->ops->tas_guard_bands_update(ocelot, port);
mutex_unlock(&ocelot->tas_lock);
mutex_unlock(&ocelot->fwd_domain_lock);
return 0;
}


@ -57,7 +57,6 @@ struct felix_info {
void (*mdio_bus_free)(struct ocelot *ocelot);
int (*port_setup_tc)(struct dsa_switch *ds, int port,
enum tc_setup_type type, void *type_data);
void (*tas_guard_bands_update)(struct ocelot *ocelot, int port);
void (*port_sched_speed_set)(struct ocelot *ocelot, int port,
u32 speed);
void (*phylink_mac_config)(struct ocelot *ocelot, int port,


@ -1209,15 +1209,17 @@ static u32 vsc9959_tas_tc_max_sdu(struct tc_taprio_qopt_offload *taprio, int tc)
static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
{
struct ocelot_port *ocelot_port = ocelot->ports[port];
struct ocelot_mm_state *mm = &ocelot->mm[port];
struct tc_taprio_qopt_offload *taprio;
u64 min_gate_len[OCELOT_NUM_TC];
u32 val, maxlen, add_frag_size;
u64 needed_min_frag_time_ps;
int speed, picos_per_byte;
u64 needed_bit_time_ps;
u32 val, maxlen;
u8 tas_speed;
int tc;
lockdep_assert_held(&ocelot->tas_lock);
lockdep_assert_held(&ocelot->fwd_domain_lock);
taprio = ocelot_port->taprio;
@ -1253,14 +1255,21 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
*/
needed_bit_time_ps = (u64)(maxlen + 24) * picos_per_byte;
/* Preemptible TCs don't need to pass a full MTU, the port will
* automatically emit a HOLD request when a preemptible TC gate closes
*/
val = ocelot_read_rix(ocelot, QSYS_PREEMPTION_CFG, port);
add_frag_size = QSYS_PREEMPTION_CFG_MM_ADD_FRAG_SIZE_X(val);
needed_min_frag_time_ps = picos_per_byte *
(u64)(24 + 2 * ethtool_mm_frag_size_add_to_min(add_frag_size));
dev_dbg(ocelot->dev,
"port %d: max frame size %d needs %llu ps at speed %d\n",
port, maxlen, needed_bit_time_ps, speed);
"port %d: max frame size %d needs %llu ps, %llu ps for mPackets at speed %d\n",
port, maxlen, needed_bit_time_ps, needed_min_frag_time_ps,
speed);
vsc9959_tas_min_gate_lengths(taprio, min_gate_len);
mutex_lock(&ocelot->fwd_domain_lock);
for (tc = 0; tc < OCELOT_NUM_TC; tc++) {
u32 requested_max_sdu = vsc9959_tas_tc_max_sdu(taprio, tc);
u64 remaining_gate_len_ps;
@ -1269,7 +1278,9 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
remaining_gate_len_ps =
vsc9959_tas_remaining_gate_len_ps(min_gate_len[tc]);
if (remaining_gate_len_ps > needed_bit_time_ps) {
if ((mm->active_preemptible_tcs & BIT(tc)) ?
remaining_gate_len_ps > needed_min_frag_time_ps :
remaining_gate_len_ps > needed_bit_time_ps) {
/* Setting QMAXSDU_CFG to 0 disables oversized frame
* dropping.
*/
@ -1323,8 +1334,6 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
ocelot_write_rix(ocelot, maxlen, QSYS_PORT_MAX_SDU, port);
ocelot->ops->cut_through_fwd(ocelot);
mutex_unlock(&ocelot->fwd_domain_lock);
}
static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
@ -1351,7 +1360,7 @@ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
break;
}
mutex_lock(&ocelot->tas_lock);
mutex_lock(&ocelot->fwd_domain_lock);
ocelot_rmw_rix(ocelot,
QSYS_TAG_CONFIG_LINK_SPEED(tas_speed),
@ -1361,7 +1370,7 @@ static void vsc9959_sched_speed_set(struct ocelot *ocelot, int port,
if (ocelot_port->taprio)
vsc9959_tas_guard_bands_update(ocelot, port);
mutex_unlock(&ocelot->tas_lock);
mutex_unlock(&ocelot->fwd_domain_lock);
}
static void vsc9959_new_base_time(struct ocelot *ocelot, ktime_t base_time,
@ -1409,7 +1418,7 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
int ret, i;
u32 val;
mutex_lock(&ocelot->tas_lock);
mutex_lock(&ocelot->fwd_domain_lock);
if (taprio->cmd == TAPRIO_CMD_DESTROY) {
ocelot_port_mqprio(ocelot, port, &taprio->mqprio);
@ -1421,7 +1430,7 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
vsc9959_tas_guard_bands_update(ocelot, port);
mutex_unlock(&ocelot->tas_lock);
mutex_unlock(&ocelot->fwd_domain_lock);
return 0;
} else if (taprio->cmd != TAPRIO_CMD_REPLACE) {
ret = -EOPNOTSUPP;
@ -1504,7 +1513,7 @@ static int vsc9959_qos_port_tas_set(struct ocelot *ocelot, int port,
ocelot_port->taprio = taprio_offload_get(taprio);
vsc9959_tas_guard_bands_update(ocelot, port);
mutex_unlock(&ocelot->tas_lock);
mutex_unlock(&ocelot->fwd_domain_lock);
return 0;
@ -1512,7 +1521,7 @@ err_reset_tc:
taprio->mqprio.qopt.num_tc = 0;
ocelot_port_mqprio(ocelot, port, &taprio->mqprio);
err_unlock:
mutex_unlock(&ocelot->tas_lock);
mutex_unlock(&ocelot->fwd_domain_lock);
return ret;
}
@ -1525,7 +1534,7 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
int port;
u32 val;
mutex_lock(&ocelot->tas_lock);
mutex_lock(&ocelot->fwd_domain_lock);
for (port = 0; port < ocelot->num_phys_ports; port++) {
ocelot_port = ocelot->ports[port];
@ -1563,7 +1572,7 @@ static void vsc9959_tas_clock_adjust(struct ocelot *ocelot)
QSYS_TAG_CONFIG_ENABLE,
QSYS_TAG_CONFIG, port);
}
mutex_unlock(&ocelot->tas_lock);
mutex_unlock(&ocelot->fwd_domain_lock);
}
static int vsc9959_qos_port_cbs_set(struct dsa_switch *ds, int port,
@ -1634,6 +1643,18 @@ static int vsc9959_qos_query_caps(struct tc_query_caps_base *base)
}
}
static int vsc9959_qos_port_mqprio(struct ocelot *ocelot, int port,
struct tc_mqprio_qopt_offload *mqprio)
{
int ret;
mutex_lock(&ocelot->fwd_domain_lock);
ret = ocelot_port_mqprio(ocelot, port, mqprio);
mutex_unlock(&ocelot->fwd_domain_lock);
return ret;
}
static int vsc9959_port_setup_tc(struct dsa_switch *ds, int port,
enum tc_setup_type type,
void *type_data)
@ -1646,7 +1667,7 @@ static int vsc9959_port_setup_tc(struct dsa_switch *ds, int port,
case TC_SETUP_QDISC_TAPRIO:
return vsc9959_qos_port_tas_set(ocelot, port, type_data);
case TC_SETUP_QDISC_MQPRIO:
return ocelot_port_mqprio(ocelot, port, type_data);
return vsc9959_qos_port_mqprio(ocelot, port, type_data);
case TC_SETUP_QDISC_CBS:
return vsc9959_qos_port_cbs_set(ds, port, type_data);
default:
@ -2591,6 +2612,7 @@ static const struct ocelot_ops vsc9959_ops = {
.cut_through_fwd = vsc9959_cut_through_fwd,
.tas_clock_adjust = vsc9959_tas_clock_adjust,
.update_stats = vsc9959_update_stats,
.tas_guard_bands_update = vsc9959_tas_guard_bands_update,
};
static const struct felix_info felix_info_vsc9959 = {
@ -2616,7 +2638,6 @@ static const struct felix_info felix_info_vsc9959 = {
.port_modes = vsc9959_port_modes,
.port_setup_tc = vsc9959_port_setup_tc,
.port_sched_speed_set = vsc9959_sched_speed_set,
.tas_guard_bands_update = vsc9959_tas_guard_bands_update,
};
/* The INTB interrupt is shared between for PTP TX timestamp availability

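For illustration, a minimal sketch (names are hypothetical, not driver code) of the guard-band decision added in the vsc9959 hunks above: a preemptible TC only needs the remaining gate time to cover a minimum fragment plus overhead, while an express TC must fit the whole frame.

#include <stdbool.h>

/* Hypothetical helper mirroring the check in vsc9959_tas_guard_bands_update():
 * returns true when QMAXSDU_CFG can stay 0, i.e. no oversized-frame dropping.
 */
static bool gate_len_sufficient(bool tc_is_preemptible,
				unsigned long long remaining_gate_len_ps,
				unsigned long long needed_bit_time_ps,
				unsigned long long needed_min_frag_time_ps)
{
	if (tc_is_preemptible)
		return remaining_gate_len_ps > needed_min_frag_time_ps;
	return remaining_gate_len_ps > needed_bit_time_ps;
}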

@ -588,6 +588,9 @@ qca8k_phy_eth_busy_wait(struct qca8k_mgmt_eth_data *mgmt_eth_data,
bool ack;
int ret;
if (!skb)
return -ENOMEM;
reinit_completion(&mgmt_eth_data->rw_done);
/* Increment seq_num and set it in the copy pkt */


@ -35,6 +35,8 @@
#define ENA_REGS_ADMIN_INTR_MASK 1
#define ENA_MAX_BACKOFF_DELAY_EXP 16U
#define ENA_MIN_ADMIN_POLL_US 100
#define ENA_MAX_ADMIN_POLL_US 5000
@ -536,6 +538,7 @@ static int ena_com_comp_status_to_errno(struct ena_com_admin_queue *admin_queue,
static void ena_delay_exponential_backoff_us(u32 exp, u32 delay_us)
{
exp = min_t(u32, exp, ENA_MAX_BACKOFF_DELAY_EXP);
delay_us = max_t(u32, ENA_MIN_ADMIN_POLL_US, delay_us);
delay_us = min_t(u32, delay_us * (1U << exp), ENA_MAX_ADMIN_POLL_US);
usleep_range(delay_us, 2 * delay_us);
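
For context, a standalone sketch of the clamped backoff computation in the ena_com.c hunk above (the constant values are taken from the hunk; the helper name, the open-coded min/max and the demo loop are illustrative only, not driver code):

#include <stdio.h>

#define ENA_MAX_BACKOFF_DELAY_EXP 16U	/* cap added by the fix */
#define ENA_MIN_ADMIN_POLL_US     100U
#define ENA_MAX_ADMIN_POLL_US     5000U

/* Mirrors the fixed ena_delay_exponential_backoff_us(): clamping 'exp'
 * keeps (1U << exp) well defined even after many polling iterations.
 */
static unsigned int ena_backoff_us(unsigned int exp, unsigned int delay_us)
{
	if (exp > ENA_MAX_BACKOFF_DELAY_EXP)
		exp = ENA_MAX_BACKOFF_DELAY_EXP;
	if (delay_us < ENA_MIN_ADMIN_POLL_US)
		delay_us = ENA_MIN_ADMIN_POLL_US;
	delay_us *= 1U << exp;
	if (delay_us > ENA_MAX_ADMIN_POLL_US)
		delay_us = ENA_MAX_ADMIN_POLL_US;
	return delay_us;
}

int main(void)
{
	/* before the fix, exp values above 31 made the shift undefined */
	for (unsigned int exp = 0; exp <= 40; exp += 10)
		printf("exp=%u -> %u us\n", exp, ena_backoff_us(exp, 100));
	return 0;
}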


@ -1492,8 +1492,6 @@ int bgmac_enet_probe(struct bgmac *bgmac)
bgmac->in_init = true;
bgmac_chip_intrs_off(bgmac);
net_dev->irq = bgmac->irq;
SET_NETDEV_DEV(net_dev, bgmac->dev);
dev_set_drvdata(bgmac->dev, bgmac);
@ -1511,6 +1509,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
*/
bgmac_clk_enable(bgmac, 0);
bgmac_chip_intrs_off(bgmac);
/* This seems to be fixing IRQ by assigning OOB #6 to the core */
if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK)) {
if (bgmac->feature_flags & BGMAC_FEAT_IRQ_ID_OOB_6)


@ -355,7 +355,7 @@ struct bufdesc_ex {
#define RX_RING_SIZE (FEC_ENET_RX_FRPPG * FEC_ENET_RX_PAGES)
#define FEC_ENET_TX_FRSIZE 2048
#define FEC_ENET_TX_FRPPG (PAGE_SIZE / FEC_ENET_TX_FRSIZE)
#define TX_RING_SIZE 512 /* Must be power of two */
#define TX_RING_SIZE 1024 /* Must be power of two */
#define TX_RING_MOD_MASK 511 /* for this to work */
#define BD_ENET_RX_INT 0x00800000
@ -544,10 +544,23 @@ enum {
XDP_STATS_TOTAL,
};
enum fec_txbuf_type {
FEC_TXBUF_T_SKB,
FEC_TXBUF_T_XDP_NDO,
};
struct fec_tx_buffer {
union {
struct sk_buff *skb;
struct xdp_frame *xdp;
};
enum fec_txbuf_type type;
};
struct fec_enet_priv_tx_q {
struct bufdesc_prop bd;
unsigned char *tx_bounce[TX_RING_SIZE];
struct sk_buff *tx_skbuff[TX_RING_SIZE];
struct fec_tx_buffer tx_buf[TX_RING_SIZE];
unsigned short tx_stop_threshold;
unsigned short tx_wake_threshold;
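
A rough sketch (not driver code; the helper name is hypothetical) of how a completion path uses the tagged tx_buf entry introduced above: skb entries are freed with dev_kfree_skb_any(), XDP entries are recycled with xdp_return_frame() and the slot is reset to the default FEC_TXBUF_T_SKB type. The full logic, including DMA unmapping and statistics, is in the fec_main.c hunks further below.

/* Hypothetical helper mirroring the pattern used by fec_enet_tx_queue() below */
static void fec_tx_buf_release(struct fec_tx_buffer *buf)
{
	if (buf->type == FEC_TXBUF_T_SKB) {
		if (buf->skb) {
			dev_kfree_skb_any(buf->skb);	/* plain skb transmit */
			buf->skb = NULL;
		}
	} else {
		if (buf->xdp) {
			xdp_return_frame(buf->xdp);	/* recycle the XDP frame */
			buf->xdp = NULL;
		}
		buf->type = FEC_TXBUF_T_SKB;	/* restore the default type */
	}
}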


@ -397,7 +397,7 @@ static void fec_dump(struct net_device *ndev)
fec16_to_cpu(bdp->cbd_sc),
fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen),
txq->tx_skbuff[index]);
txq->tx_buf[index].skb);
bdp = fec_enet_get_nextdesc(bdp, &txq->bd);
index++;
} while (bdp != txq->bd.base);
@ -654,7 +654,7 @@ static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
index = fec_enet_get_bd_index(last_bdp, &txq->bd);
/* Save skb pointer */
txq->tx_skbuff[index] = skb;
txq->tx_buf[index].skb = skb;
/* Make sure the updates to rest of the descriptor are performed before
* transferring ownership.
@ -672,9 +672,7 @@ static int fec_enet_txq_submit_skb(struct fec_enet_priv_tx_q *txq,
skb_tx_timestamp(skb);
/* Make sure the update to bdp and tx_skbuff are performed before
* txq->bd.cur.
*/
/* Make sure the update to bdp is performed before txq->bd.cur. */
wmb();
txq->bd.cur = bdp;
@ -862,7 +860,7 @@ static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
}
/* Save skb pointer */
txq->tx_skbuff[index] = skb;
txq->tx_buf[index].skb = skb;
skb_tx_timestamp(skb);
txq->bd.cur = bdp;
@ -952,16 +950,33 @@ static void fec_enet_bd_init(struct net_device *dev)
for (i = 0; i < txq->bd.ring_size; i++) {
/* Initialize the BD for every fragment in the page. */
bdp->cbd_sc = cpu_to_fec16(0);
if (bdp->cbd_bufaddr &&
!IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))
dma_unmap_single(&fep->pdev->dev,
fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen),
DMA_TO_DEVICE);
if (txq->tx_skbuff[i]) {
dev_kfree_skb_any(txq->tx_skbuff[i]);
txq->tx_skbuff[i] = NULL;
if (txq->tx_buf[i].type == FEC_TXBUF_T_SKB) {
if (bdp->cbd_bufaddr &&
!IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))
dma_unmap_single(&fep->pdev->dev,
fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen),
DMA_TO_DEVICE);
if (txq->tx_buf[i].skb) {
dev_kfree_skb_any(txq->tx_buf[i].skb);
txq->tx_buf[i].skb = NULL;
}
} else {
if (bdp->cbd_bufaddr)
dma_unmap_single(&fep->pdev->dev,
fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen),
DMA_TO_DEVICE);
if (txq->tx_buf[i].xdp) {
xdp_return_frame(txq->tx_buf[i].xdp);
txq->tx_buf[i].xdp = NULL;
}
/* restore default tx buffer type: FEC_TXBUF_T_SKB */
txq->tx_buf[i].type = FEC_TXBUF_T_SKB;
}
bdp->cbd_bufaddr = cpu_to_fec32(0);
bdp = fec_enet_get_nextdesc(bdp, &txq->bd);
}
@ -1360,6 +1375,7 @@ static void
fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
{
struct fec_enet_private *fep;
struct xdp_frame *xdpf;
struct bufdesc *bdp;
unsigned short status;
struct sk_buff *skb;
@ -1387,16 +1403,31 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
index = fec_enet_get_bd_index(bdp, &txq->bd);
skb = txq->tx_skbuff[index];
txq->tx_skbuff[index] = NULL;
if (!IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))
dma_unmap_single(&fep->pdev->dev,
fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen),
DMA_TO_DEVICE);
bdp->cbd_bufaddr = cpu_to_fec32(0);
if (!skb)
goto skb_done;
if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB) {
skb = txq->tx_buf[index].skb;
txq->tx_buf[index].skb = NULL;
if (bdp->cbd_bufaddr &&
!IS_TSO_HEADER(txq, fec32_to_cpu(bdp->cbd_bufaddr)))
dma_unmap_single(&fep->pdev->dev,
fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen),
DMA_TO_DEVICE);
bdp->cbd_bufaddr = cpu_to_fec32(0);
if (!skb)
goto tx_buf_done;
} else {
xdpf = txq->tx_buf[index].xdp;
if (bdp->cbd_bufaddr)
dma_unmap_single(&fep->pdev->dev,
fec32_to_cpu(bdp->cbd_bufaddr),
fec16_to_cpu(bdp->cbd_datlen),
DMA_TO_DEVICE);
bdp->cbd_bufaddr = cpu_to_fec32(0);
if (!xdpf) {
txq->tx_buf[index].type = FEC_TXBUF_T_SKB;
goto tx_buf_done;
}
}
/* Check for errors. */
if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
@ -1415,21 +1446,11 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
ndev->stats.tx_carrier_errors++;
} else {
ndev->stats.tx_packets++;
ndev->stats.tx_bytes += skb->len;
}
/* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who
* are to time stamp the packet, so we still need to check time
* stamping enabled flag.
*/
if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS &&
fep->hwts_tx_en) &&
fep->bufdesc_ex) {
struct skb_shared_hwtstamps shhwtstamps;
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
fec_enet_hwtstamp(fep, fec32_to_cpu(ebdp->ts), &shhwtstamps);
skb_tstamp_tx(skb, &shhwtstamps);
if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB)
ndev->stats.tx_bytes += skb->len;
else
ndev->stats.tx_bytes += xdpf->len;
}
/* Deferred means some collisions occurred during transmit,
@ -1438,10 +1459,32 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
if (status & BD_ENET_TX_DEF)
ndev->stats.collisions++;
/* Free the sk buffer associated with this last transmit */
dev_kfree_skb_any(skb);
skb_done:
/* Make sure the update to bdp and tx_skbuff are performed
if (txq->tx_buf[index].type == FEC_TXBUF_T_SKB) {
/* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who
* are to time stamp the packet, so we still need to check time
* stamping enabled flag.
*/
if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS &&
fep->hwts_tx_en) && fep->bufdesc_ex) {
struct skb_shared_hwtstamps shhwtstamps;
struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp;
fec_enet_hwtstamp(fep, fec32_to_cpu(ebdp->ts), &shhwtstamps);
skb_tstamp_tx(skb, &shhwtstamps);
}
/* Free the sk buffer associated with this last transmit */
dev_kfree_skb_any(skb);
} else {
xdp_return_frame(xdpf);
txq->tx_buf[index].xdp = NULL;
/* restore default tx buffer type: FEC_TXBUF_T_SKB */
txq->tx_buf[index].type = FEC_TXBUF_T_SKB;
}
tx_buf_done:
/* Make sure the update to bdp and tx_buf are performed
* before dirty_tx
*/
wmb();
@ -3249,9 +3292,19 @@ static void fec_enet_free_buffers(struct net_device *ndev)
for (i = 0; i < txq->bd.ring_size; i++) {
kfree(txq->tx_bounce[i]);
txq->tx_bounce[i] = NULL;
skb = txq->tx_skbuff[i];
txq->tx_skbuff[i] = NULL;
dev_kfree_skb(skb);
if (txq->tx_buf[i].type == FEC_TXBUF_T_SKB) {
skb = txq->tx_buf[i].skb;
txq->tx_buf[i].skb = NULL;
dev_kfree_skb(skb);
} else {
if (txq->tx_buf[i].xdp) {
xdp_return_frame(txq->tx_buf[i].xdp);
txq->tx_buf[i].xdp = NULL;
}
txq->tx_buf[i].type = FEC_TXBUF_T_SKB;
}
}
}
}
@ -3296,8 +3349,7 @@ static int fec_enet_alloc_queue(struct net_device *ndev)
fep->total_tx_ring_size += fep->tx_queue[i]->bd.ring_size;
txq->tx_stop_threshold = FEC_MAX_SKB_DESCS;
txq->tx_wake_threshold =
(txq->bd.ring_size - txq->tx_stop_threshold) / 2;
txq->tx_wake_threshold = FEC_MAX_SKB_DESCS + 2 * MAX_SKB_FRAGS;
txq->tso_hdrs = dma_alloc_coherent(&fep->pdev->dev,
txq->bd.ring_size * TSO_HEADER_SIZE,
@ -3732,12 +3784,18 @@ static int fec_enet_bpf(struct net_device *dev, struct netdev_bpf *bpf)
if (fep->quirks & FEC_QUIRK_SWAP_FRAME)
return -EOPNOTSUPP;
if (!bpf->prog)
xdp_features_clear_redirect_target(dev);
if (is_run) {
napi_disable(&fep->napi);
netif_tx_disable(dev);
}
old_prog = xchg(&fep->xdp_prog, bpf->prog);
if (old_prog)
bpf_prog_put(old_prog);
fec_restart(dev);
if (is_run) {
@ -3745,8 +3803,8 @@ static int fec_enet_bpf(struct net_device *dev, struct netdev_bpf *bpf)
netif_tx_start_all_queues(dev);
}
if (old_prog)
bpf_prog_put(old_prog);
if (bpf->prog)
xdp_features_set_redirect_target(dev, false);
return 0;
@ -3778,7 +3836,7 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
entries_free = fec_enet_get_free_txdesc_num(txq);
if (entries_free < MAX_SKB_FRAGS + 1) {
netdev_err(fep->netdev, "NOT enough BD for SG!\n");
netdev_err_once(fep->netdev, "NOT enough BD for SG!\n");
return -EBUSY;
}
@ -3811,7 +3869,8 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
ebdp->cbd_esc = cpu_to_fec32(estatus);
}
txq->tx_skbuff[index] = NULL;
txq->tx_buf[index].type = FEC_TXBUF_T_XDP_NDO;
txq->tx_buf[index].xdp = frame;
/* Make sure the updates to rest of the descriptor are performed before
* transferring ownership.
@ -4016,8 +4075,7 @@ static int fec_enet_init(struct net_device *ndev)
if (!(fep->quirks & FEC_QUIRK_SWAP_FRAME))
ndev->xdp_features = NETDEV_XDP_ACT_BASIC |
NETDEV_XDP_ACT_REDIRECT |
NETDEV_XDP_ACT_NDO_XMIT;
NETDEV_XDP_ACT_REDIRECT;
fec_restart(ndev);


@ -964,5 +964,6 @@ void gve_handle_report_stats(struct gve_priv *priv);
/* exported by ethtool.c */
extern const struct ethtool_ops gve_ethtool_ops;
/* needed by ethtool */
extern char gve_driver_name[];
extern const char gve_version_str[];
#endif /* _GVE_H_ */


@ -15,7 +15,7 @@ static void gve_get_drvinfo(struct net_device *netdev,
{
struct gve_priv *priv = netdev_priv(netdev);
strscpy(info->driver, "gve", sizeof(info->driver));
strscpy(info->driver, gve_driver_name, sizeof(info->driver));
strscpy(info->version, gve_version_str, sizeof(info->version));
strscpy(info->bus_info, pci_name(priv->pdev), sizeof(info->bus_info));
}
@ -590,6 +590,9 @@ static int gve_get_link_ksettings(struct net_device *netdev,
err = gve_adminq_report_link_speed(priv);
cmd->base.speed = priv->link_speed;
cmd->base.duplex = DUPLEX_FULL;
return err;
}


@ -33,6 +33,7 @@
#define MIN_TX_TIMEOUT_GAP (1000 * 10)
#define DQO_TX_MAX 0x3FFFF
char gve_driver_name[] = "gve";
const char gve_version_str[] = GVE_VERSION;
static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
@ -2200,7 +2201,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (err)
return err;
err = pci_request_regions(pdev, "gvnic-cfg");
err = pci_request_regions(pdev, gve_driver_name);
if (err)
goto abort_with_enabled;
@ -2393,8 +2394,8 @@ static const struct pci_device_id gve_id_table[] = {
{ }
};
static struct pci_driver gvnic_driver = {
.name = "gvnic",
static struct pci_driver gve_driver = {
.name = gve_driver_name,
.id_table = gve_id_table,
.probe = gve_probe,
.remove = gve_remove,
@ -2405,10 +2406,10 @@ static struct pci_driver gvnic_driver = {
#endif
};
module_pci_driver(gvnic_driver);
module_pci_driver(gve_driver);
MODULE_DEVICE_TABLE(pci, gve_id_table);
MODULE_AUTHOR("Google, Inc.");
MODULE_DESCRIPTION("gVNIC Driver");
MODULE_DESCRIPTION("Google Virtual NIC Driver");
MODULE_LICENSE("Dual MIT/GPL");
MODULE_VERSION(GVE_VERSION);


@ -5739,6 +5739,13 @@ ice_set_tx_maxrate(struct net_device *netdev, int queue_index, u32 maxrate)
q_handle = vsi->tx_rings[queue_index]->q_handle;
tc = ice_dcb_get_tc(vsi, queue_index);
vsi = ice_locate_vsi_using_queue(vsi, queue_index);
if (!vsi) {
netdev_err(netdev, "Invalid VSI for given queue %d\n",
queue_index);
return -EINVAL;
}
/* Set BW back to default, when user set maxrate to 0 */
if (!maxrate)
status = ice_cfg_q_bw_dflt_lmt(vsi->port_info, vsi->idx, tc,
@ -7872,10 +7879,10 @@ static int
ice_validate_mqprio_qopt(struct ice_vsi *vsi,
struct tc_mqprio_qopt_offload *mqprio_qopt)
{
u64 sum_max_rate = 0, sum_min_rate = 0;
int non_power_of_2_qcount = 0;
struct ice_pf *pf = vsi->back;
int max_rss_q_cnt = 0;
u64 sum_min_rate = 0;
struct device *dev;
int i, speed;
u8 num_tc;
@ -7891,6 +7898,7 @@ ice_validate_mqprio_qopt(struct ice_vsi *vsi,
dev = ice_pf_to_dev(pf);
vsi->ch_rss_size = 0;
num_tc = mqprio_qopt->qopt.num_tc;
speed = ice_get_link_speed_kbps(vsi);
for (i = 0; num_tc; i++) {
int qcount = mqprio_qopt->qopt.count[i];
@ -7931,7 +7939,6 @@ ice_validate_mqprio_qopt(struct ice_vsi *vsi,
*/
max_rate = mqprio_qopt->max_rate[i];
max_rate = div_u64(max_rate, ICE_BW_KBPS_DIVISOR);
sum_max_rate += max_rate;
/* min_rate is minimum guaranteed rate and it can't be zero */
min_rate = mqprio_qopt->min_rate[i];
@ -7944,6 +7951,12 @@ ice_validate_mqprio_qopt(struct ice_vsi *vsi,
return -EINVAL;
}
if (max_rate && max_rate > speed) {
dev_err(dev, "TC%d: max_rate(%llu Kbps) > link speed of %u Kbps\n",
i, max_rate, speed);
return -EINVAL;
}
iter_div_u64_rem(min_rate, ICE_MIN_BW_LIMIT, &rem);
if (rem) {
dev_err(dev, "TC%d: Min Rate not multiple of %u Kbps",
@ -7981,12 +7994,6 @@ ice_validate_mqprio_qopt(struct ice_vsi *vsi,
(mqprio_qopt->qopt.offset[i] + mqprio_qopt->qopt.count[i]))
return -EINVAL;
speed = ice_get_link_speed_kbps(vsi);
if (sum_max_rate && sum_max_rate > (u64)speed) {
dev_err(dev, "Invalid max Tx rate(%llu) Kbps > speed(%u) Kbps specified\n",
sum_max_rate, speed);
return -EINVAL;
}
if (sum_min_rate && sum_min_rate > (u64)speed) {
dev_err(dev, "Invalid min Tx rate(%llu) Kbps > speed (%u) Kbps specified\n",
sum_min_rate, speed);


@ -750,17 +750,16 @@ exit:
/**
* ice_locate_vsi_using_queue - locate VSI using queue (forward to queue action)
* @vsi: Pointer to VSI
* @tc_fltr: Pointer to tc_flower_filter
* @queue: Queue index
*
* Locate the VSI using specified queue. When ADQ is not enabled, always
* return input VSI, otherwise locate corresponding VSI based on per channel
* offset and qcount
* Locate the VSI using specified "queue". When ADQ is not enabled,
* always return input VSI, otherwise locate corresponding
* VSI based on per channel "offset" and "qcount"
*/
static struct ice_vsi *
ice_locate_vsi_using_queue(struct ice_vsi *vsi,
struct ice_tc_flower_fltr *tc_fltr)
struct ice_vsi *
ice_locate_vsi_using_queue(struct ice_vsi *vsi, int queue)
{
int num_tc, tc, queue;
int num_tc, tc;
/* if ADQ is not active, passed VSI is the candidate VSI */
if (!ice_is_adq_active(vsi->back))
@ -770,7 +769,6 @@ ice_locate_vsi_using_queue(struct ice_vsi *vsi,
* upon queue number)
*/
num_tc = vsi->mqprio_qopt.qopt.num_tc;
queue = tc_fltr->action.fwd.q.queue;
for (tc = 0; tc < num_tc; tc++) {
int qcount = vsi->mqprio_qopt.qopt.count[tc];
@ -812,6 +810,7 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr)
struct ice_pf *pf = vsi->back;
struct device *dev;
u32 tc_class;
int q;
dev = ice_pf_to_dev(pf);
@ -840,7 +839,8 @@ ice_tc_forward_action(struct ice_vsi *vsi, struct ice_tc_flower_fltr *tc_fltr)
/* Determine destination VSI even though the action is
* FWD_TO_QUEUE, because QUEUE is associated with VSI
*/
dest_vsi = tc_fltr->dest_vsi;
q = tc_fltr->action.fwd.q.queue;
dest_vsi = ice_locate_vsi_using_queue(vsi, q);
break;
default:
dev_err(dev,
@ -1716,7 +1716,7 @@ ice_tc_forward_to_queue(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr,
/* If ADQ is configured, and the queue belongs to ADQ VSI, then prepare
* ADQ switch filter
*/
ch_vsi = ice_locate_vsi_using_queue(vsi, fltr);
ch_vsi = ice_locate_vsi_using_queue(vsi, fltr->action.fwd.q.queue);
if (!ch_vsi)
return -EINVAL;
fltr->dest_vsi = ch_vsi;


@ -204,6 +204,7 @@ static inline int ice_chnl_dmac_fltr_cnt(struct ice_pf *pf)
return pf->num_dmac_chnl_fltrs;
}
struct ice_vsi *ice_locate_vsi_using_queue(struct ice_vsi *vsi, int queue);
int
ice_add_cls_flower(struct net_device *netdev, struct ice_vsi *vsi,
struct flow_cls_offload *cls_flower);


@ -14,6 +14,7 @@
#include <linux/timecounter.h>
#include <linux/net_tstamp.h>
#include <linux/bitfield.h>
#include <linux/hrtimer.h>
#include "igc_hw.h"
@ -101,6 +102,8 @@ struct igc_ring {
u32 start_time;
u32 end_time;
u32 max_sdu;
bool oper_gate_closed; /* Operating gate. True if the TX Queue is closed */
bool admin_gate_closed; /* Future gate. True if the TX Queue will be closed */
/* CBS parameters */
bool cbs_enable; /* indicates if CBS is enabled */
@ -160,6 +163,7 @@ struct igc_adapter {
struct timer_list watchdog_timer;
struct timer_list dma_err_timer;
struct timer_list phy_info_timer;
struct hrtimer hrtimer;
u32 wol;
u32 en_mng_pt;
@ -184,10 +188,13 @@ struct igc_adapter {
u32 max_frame_size;
u32 min_frame_size;
int tc_setup_type;
ktime_t base_time;
ktime_t cycle_time;
bool qbv_enable;
bool taprio_offload_enable;
u32 qbv_config_change_errors;
bool qbv_transition;
unsigned int qbv_count;
/* OS defined structs */
struct pci_dev *pdev;


@ -1708,6 +1708,8 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev,
/* twisted pair */
cmd->base.port = PORT_TP;
cmd->base.phy_address = hw->phy.addr;
ethtool_link_ksettings_add_link_mode(cmd, supported, TP);
ethtool_link_ksettings_add_link_mode(cmd, advertising, TP);
/* advertising link modes */
if (hw->phy.autoneg_advertised & ADVERTISE_10_HALF)


@ -711,7 +711,6 @@ static void igc_configure_tx_ring(struct igc_adapter *adapter,
/* disable the queue */
wr32(IGC_TXDCTL(reg_idx), 0);
wrfl();
mdelay(10);
wr32(IGC_TDLEN(reg_idx),
ring->count * sizeof(union igc_adv_tx_desc));
@ -1017,7 +1016,7 @@ static __le32 igc_tx_launchtime(struct igc_ring *ring, ktime_t txtime,
ktime_t base_time = adapter->base_time;
ktime_t now = ktime_get_clocktai();
ktime_t baset_est, end_of_cycle;
u32 launchtime;
s32 launchtime;
s64 n;
n = div64_s64(ktime_sub_ns(now, base_time), cycle_time);
@ -1030,7 +1029,7 @@ static __le32 igc_tx_launchtime(struct igc_ring *ring, ktime_t txtime,
*first_flag = true;
ring->last_ff_cycle = baset_est;
if (ktime_compare(txtime, ring->last_tx_cycle) > 0)
if (ktime_compare(end_of_cycle, ring->last_tx_cycle) > 0)
*insert_empty = true;
}
}
@ -1573,16 +1572,12 @@ done:
first->bytecount = skb->len;
first->gso_segs = 1;
if (tx_ring->max_sdu > 0) {
u32 max_sdu = 0;
if (adapter->qbv_transition || tx_ring->oper_gate_closed)
goto out_drop;
max_sdu = tx_ring->max_sdu +
(skb_vlan_tagged(first->skb) ? VLAN_HLEN : 0);
if (first->bytecount > max_sdu) {
adapter->stats.txdrop++;
goto out_drop;
}
if (tx_ring->max_sdu > 0 && first->bytecount > tx_ring->max_sdu) {
adapter->stats.txdrop++;
goto out_drop;
}
if (unlikely(test_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags) &&
@ -3012,8 +3007,8 @@ static bool igc_clean_tx_irq(struct igc_q_vector *q_vector, int napi_budget)
time_after(jiffies, tx_buffer->time_stamp +
(adapter->tx_timeout_factor * HZ)) &&
!(rd32(IGC_STATUS) & IGC_STATUS_TXOFF) &&
(rd32(IGC_TDH(tx_ring->reg_idx)) !=
readl(tx_ring->tail))) {
(rd32(IGC_TDH(tx_ring->reg_idx)) != readl(tx_ring->tail)) &&
!tx_ring->oper_gate_closed) {
/* detected Tx unit hang */
netdev_err(tx_ring->netdev,
"Detected Tx Unit Hang\n"
@ -6102,7 +6097,10 @@ static int igc_tsn_clear_schedule(struct igc_adapter *adapter)
adapter->base_time = 0;
adapter->cycle_time = NSEC_PER_SEC;
adapter->taprio_offload_enable = false;
adapter->qbv_config_change_errors = 0;
adapter->qbv_transition = false;
adapter->qbv_count = 0;
for (i = 0; i < adapter->num_tx_queues; i++) {
struct igc_ring *ring = adapter->tx_ring[i];
@ -6110,6 +6108,8 @@ static int igc_tsn_clear_schedule(struct igc_adapter *adapter)
ring->start_time = 0;
ring->end_time = NSEC_PER_SEC;
ring->max_sdu = 0;
ring->oper_gate_closed = false;
ring->admin_gate_closed = false;
}
return 0;
@ -6121,27 +6121,20 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
bool queue_configured[IGC_MAX_TX_QUEUES] = { };
struct igc_hw *hw = &adapter->hw;
u32 start_time = 0, end_time = 0;
struct timespec64 now;
size_t n;
int i;
switch (qopt->cmd) {
case TAPRIO_CMD_REPLACE:
adapter->qbv_enable = true;
break;
case TAPRIO_CMD_DESTROY:
adapter->qbv_enable = false;
break;
default:
return -EOPNOTSUPP;
}
if (!adapter->qbv_enable)
if (qopt->cmd == TAPRIO_CMD_DESTROY)
return igc_tsn_clear_schedule(adapter);
if (qopt->cmd != TAPRIO_CMD_REPLACE)
return -EOPNOTSUPP;
if (qopt->base_time < 0)
return -ERANGE;
if (igc_is_device_id_i225(hw) && adapter->base_time)
if (igc_is_device_id_i225(hw) && adapter->taprio_offload_enable)
return -EALREADY;
if (!validate_schedule(adapter, qopt))
@ -6149,6 +6142,9 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
adapter->cycle_time = qopt->cycle_time;
adapter->base_time = qopt->base_time;
adapter->taprio_offload_enable = true;
igc_ptp_read(adapter, &now);
for (n = 0; n < qopt->num_entries; n++) {
struct tc_taprio_sched_entry *e = &qopt->entries[n];
@ -6184,7 +6180,10 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
ring->start_time = start_time;
ring->end_time = end_time;
queue_configured[i] = true;
if (ring->start_time >= adapter->cycle_time)
queue_configured[i] = false;
else
queue_configured[i] = true;
}
start_time += e->interval;
@ -6194,8 +6193,20 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
* If not, set the start and end time to be end time.
*/
for (i = 0; i < adapter->num_tx_queues; i++) {
struct igc_ring *ring = adapter->tx_ring[i];
if (!is_base_time_past(qopt->base_time, &now)) {
ring->admin_gate_closed = false;
} else {
ring->oper_gate_closed = false;
ring->admin_gate_closed = false;
}
if (!queue_configured[i]) {
struct igc_ring *ring = adapter->tx_ring[i];
if (!is_base_time_past(qopt->base_time, &now))
ring->admin_gate_closed = true;
else
ring->oper_gate_closed = true;
ring->start_time = end_time;
ring->end_time = end_time;
@ -6207,7 +6218,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
struct net_device *dev = adapter->netdev;
if (qopt->max_sdu[i])
ring->max_sdu = qopt->max_sdu[i] + dev->hard_header_len;
ring->max_sdu = qopt->max_sdu[i] + dev->hard_header_len - ETH_TLEN;
else
ring->max_sdu = 0;
}
@ -6327,6 +6338,8 @@ static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type,
{
struct igc_adapter *adapter = netdev_priv(dev);
adapter->tc_setup_type = type;
switch (type) {
case TC_QUERY_CAPS:
return igc_tc_query_caps(adapter, type_data);
@ -6574,6 +6587,27 @@ static const struct xdp_metadata_ops igc_xdp_metadata_ops = {
.xmo_rx_timestamp = igc_xdp_rx_timestamp,
};
static enum hrtimer_restart igc_qbv_scheduling_timer(struct hrtimer *timer)
{
struct igc_adapter *adapter = container_of(timer, struct igc_adapter,
hrtimer);
unsigned int i;
adapter->qbv_transition = true;
for (i = 0; i < adapter->num_tx_queues; i++) {
struct igc_ring *tx_ring = adapter->tx_ring[i];
if (tx_ring->admin_gate_closed) {
tx_ring->admin_gate_closed = false;
tx_ring->oper_gate_closed = true;
} else {
tx_ring->oper_gate_closed = false;
}
}
adapter->qbv_transition = false;
return HRTIMER_NORESTART;
}
/**
* igc_probe - Device Initialization Routine
* @pdev: PCI device information struct
@ -6752,6 +6786,9 @@ static int igc_probe(struct pci_dev *pdev,
INIT_WORK(&adapter->reset_task, igc_reset_task);
INIT_WORK(&adapter->watchdog_task, igc_watchdog_task);
hrtimer_init(&adapter->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
adapter->hrtimer.function = &igc_qbv_scheduling_timer;
/* Initialize link properties that are user-changeable */
adapter->fc_autoneg = true;
hw->mac.autoneg = true;
@ -6855,6 +6892,7 @@ static void igc_remove(struct pci_dev *pdev)
cancel_work_sync(&adapter->reset_task);
cancel_work_sync(&adapter->watchdog_task);
hrtimer_cancel(&adapter->hrtimer);
/* Release control of h/w to f/w. If f/w is AMT enabled, this
* would have already happened in close and is redundant.


@ -356,16 +356,35 @@ static int igc_ptp_feature_enable_i225(struct ptp_clock_info *ptp,
tsim &= ~IGC_TSICR_TT0;
}
if (on) {
struct timespec64 safe_start;
int i = rq->perout.index;
igc_pin_perout(igc, i, pin, use_freq);
igc->perout[i].start.tv_sec = rq->perout.start.sec;
igc_ptp_read(igc, &safe_start);
/* PPS output start time is triggered by Target time(TT)
* register. Programming any past time value into TT
* register will cause PPS to never start. Need to make
* sure we program the TT register a time ahead in
* future. There isn't a stringent need to fire PPS out
* right away. Adding +2 seconds should take care of
* corner cases. Let's say if the SYSTIML is close to
* wrap up and the timer keeps ticking as we program the
* register, adding +2seconds is safe bet.
*/
safe_start.tv_sec += 2;
if (rq->perout.start.sec < safe_start.tv_sec)
igc->perout[i].start.tv_sec = safe_start.tv_sec;
else
igc->perout[i].start.tv_sec = rq->perout.start.sec;
igc->perout[i].start.tv_nsec = rq->perout.start.nsec;
igc->perout[i].period.tv_sec = ts.tv_sec;
igc->perout[i].period.tv_nsec = ts.tv_nsec;
wr32(trgttimh, rq->perout.start.sec);
wr32(trgttimh, (u32)igc->perout[i].start.tv_sec);
/* For now, always select timer 0 as source. */
wr32(trgttiml, rq->perout.start.nsec | IGC_TT_IO_TIMER_SEL_SYSTIM0);
wr32(trgttiml, (u32)(igc->perout[i].start.tv_nsec |
IGC_TT_IO_TIMER_SEL_SYSTIM0));
if (use_freq)
wr32(freqout, ns);
tsauxc |= tsauxc_mask;


@ -37,7 +37,7 @@ static unsigned int igc_tsn_new_flags(struct igc_adapter *adapter)
{
unsigned int new_flags = adapter->flags & ~IGC_FLAG_TSN_ANY_ENABLED;
if (adapter->qbv_enable)
if (adapter->taprio_offload_enable)
new_flags |= IGC_FLAG_TSN_QBV_ENABLED;
if (is_any_launchtime(adapter))
@ -114,7 +114,6 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter)
static int igc_tsn_enable_offload(struct igc_adapter *adapter)
{
struct igc_hw *hw = &adapter->hw;
bool tsn_mode_reconfig = false;
u32 tqavctrl, baset_l, baset_h;
u32 sec, nsec, cycle;
ktime_t base_time, systim;
@ -133,8 +132,28 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
wr32(IGC_STQT(i), ring->start_time);
wr32(IGC_ENDQT(i), ring->end_time);
txqctl |= IGC_TXQCTL_STRICT_CYCLE |
IGC_TXQCTL_STRICT_END;
if (adapter->taprio_offload_enable) {
/* If taprio_offload_enable is set we are in "taprio"
* mode and we need to be strict about the
* cycles: only transmit a packet if it can be
* completed during that cycle.
*
* If taprio_offload_enable is NOT true when
* enabling TSN offload, the cycle should have
* no external effects, but is only used internally
* to adapt the base time register after a second
* has passed.
*
* Enabling strict mode in this case would
* unnecessarily prevent the transmission of
* certain packets (i.e. at the boundary of a
* second) and thus interfere with the launchtime
* feature that promises transmission at a
* certain point in time.
*/
txqctl |= IGC_TXQCTL_STRICT_CYCLE |
IGC_TXQCTL_STRICT_END;
}
if (ring->launchtime_enable)
txqctl |= IGC_TXQCTL_QUEUE_MODE_LAUNCHT;
@ -228,11 +247,10 @@ skip_cbs:
tqavctrl = rd32(IGC_TQAVCTRL) & ~IGC_TQAVCTRL_FUTSCDDIS;
if (tqavctrl & IGC_TQAVCTRL_TRANSMIT_MODE_TSN)
tsn_mode_reconfig = true;
tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV;
adapter->qbv_count++;
cycle = adapter->cycle_time;
base_time = adapter->base_time;
@ -249,17 +267,29 @@ skip_cbs:
* Gate Control List (GCL) is running.
*/
if ((rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) &&
tsn_mode_reconfig)
(adapter->tc_setup_type == TC_SETUP_QDISC_TAPRIO) &&
(adapter->qbv_count > 1))
adapter->qbv_config_change_errors++;
} else {
/* According to datasheet section 7.5.2.9.3.3, FutScdDis bit
* has to be configured before the cycle time and base time.
* Tx won't hang if there is a GCL is already running,
* so in this case we don't need to set FutScdDis.
*/
if (igc_is_device_id_i226(hw) &&
!(rd32(IGC_BASET_H) || rd32(IGC_BASET_L)))
tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS;
if (igc_is_device_id_i226(hw)) {
ktime_t adjust_time, expires_time;
/* According to datasheet section 7.5.2.9.3.3, FutScdDis bit
* has to be configured before the cycle time and base time.
* Tx won't hang if a GCL is already running,
* so in this case we don't need to set FutScdDis.
*/
if (!(rd32(IGC_BASET_H) || rd32(IGC_BASET_L)))
tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS;
nsec = rd32(IGC_SYSTIML);
sec = rd32(IGC_SYSTIMH);
systim = ktime_set(sec, nsec);
adjust_time = adapter->base_time;
expires_time = ktime_sub_ns(adjust_time, systim);
hrtimer_start(&adapter->hrtimer, expires_time, HRTIMER_MODE_REL);
}
}
wr32(IGC_TQAVCTRL, tqavctrl);
@ -305,7 +335,11 @@ int igc_tsn_offload_apply(struct igc_adapter *adapter)
{
struct igc_hw *hw = &adapter->hw;
if (netif_running(adapter->netdev) && igc_is_device_id_i225(hw)) {
/* Per I225/6 HW Design Section 7.5.2.1, transmit mode
* cannot be changed dynamically. Require reset the adapter.
*/
if (netif_running(adapter->netdev) &&
(igc_is_device_id_i225(hw) || !adapter->qbv_count)) {
schedule_work(&adapter->reset_task);
return 0;
}


@ -1511,7 +1511,7 @@ static void mvneta_defaults_set(struct mvneta_port *pp)
*/
if (txq_number == 1)
txq_map = (cpu == pp->rxq_def) ?
MVNETA_CPU_TXQ_ACCESS(1) : 0;
MVNETA_CPU_TXQ_ACCESS(0) : 0;
} else {
txq_map = MVNETA_CPU_TXQ_ACCESS_ALL_MASK;
@ -4356,7 +4356,7 @@ static void mvneta_percpu_elect(struct mvneta_port *pp)
*/
if (txq_number == 1)
txq_map = (cpu == elected_cpu) ?
MVNETA_CPU_TXQ_ACCESS(1) : 0;
MVNETA_CPU_TXQ_ACCESS(0) : 0;
else
txq_map = mvreg_read(pp, MVNETA_CPU_MAP(cpu)) &
MVNETA_CPU_TXQ_ACCESS_ALL_MASK;


@ -208,7 +208,7 @@ struct ptp *ptp_get(void)
/* Check driver is bound to PTP block */
if (!ptp)
ptp = ERR_PTR(-EPROBE_DEFER);
else
else if (!IS_ERR(ptp))
pci_dev_get(ptp->pdev);
return ptp;
@ -388,11 +388,10 @@ static int ptp_extts_on(struct ptp *ptp, int on)
static int ptp_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
struct device *dev = &pdev->dev;
struct ptp *ptp;
int err;
ptp = devm_kzalloc(dev, sizeof(*ptp), GFP_KERNEL);
ptp = kzalloc(sizeof(*ptp), GFP_KERNEL);
if (!ptp) {
err = -ENOMEM;
goto error;
@ -428,20 +427,19 @@ static int ptp_probe(struct pci_dev *pdev,
return 0;
error_free:
devm_kfree(dev, ptp);
kfree(ptp);
error:
/* For `ptp_get()` we need to differentiate between the case
* when the core has not tried to probe this device and the case when
* the probe failed. In the later case we pretend that the
* initialization was successful and keep the error in
* the probe failed. In the later case we keep the error in
* `dev->driver_data`.
*/
pci_set_drvdata(pdev, ERR_PTR(err));
if (!first_ptp_block)
first_ptp_block = ERR_PTR(err);
return 0;
return err;
}
static void ptp_remove(struct pci_dev *pdev)
@ -449,16 +447,17 @@ static void ptp_remove(struct pci_dev *pdev)
struct ptp *ptp = pci_get_drvdata(pdev);
u64 clock_cfg;
if (cn10k_ptp_errata(ptp) && hrtimer_active(&ptp->hrtimer))
hrtimer_cancel(&ptp->hrtimer);
if (IS_ERR_OR_NULL(ptp))
return;
if (cn10k_ptp_errata(ptp) && hrtimer_active(&ptp->hrtimer))
hrtimer_cancel(&ptp->hrtimer);
/* Disable PTP clock */
clock_cfg = readq(ptp->reg_base + PTP_CLOCK_CFG);
clock_cfg &= ~PTP_CLOCK_CFG_PTP_EN;
writeq(clock_cfg, ptp->reg_base + PTP_CLOCK_CFG);
kfree(ptp);
}
static const struct pci_device_id ptp_id_table[] = {


@ -3252,7 +3252,7 @@ static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
rvu->ptp = ptp_get();
if (IS_ERR(rvu->ptp)) {
err = PTR_ERR(rvu->ptp);
if (err == -EPROBE_DEFER)
if (err)
goto err_release_regions;
rvu->ptp = NULL;
}


@ -4069,21 +4069,14 @@ int rvu_mbox_handler_nix_set_rx_mode(struct rvu *rvu, struct nix_rx_mode *req,
}
/* install/uninstall promisc entry */
if (promisc) {
if (promisc)
rvu_npc_install_promisc_entry(rvu, pcifunc, nixlf,
pfvf->rx_chan_base,
pfvf->rx_chan_cnt);
if (rvu_npc_exact_has_match_table(rvu))
rvu_npc_exact_promisc_enable(rvu, pcifunc);
} else {
else
if (!nix_rx_multicast)
rvu_npc_enable_promisc_entry(rvu, pcifunc, nixlf, false);
if (rvu_npc_exact_has_match_table(rvu))
rvu_npc_exact_promisc_disable(rvu, pcifunc);
}
return 0;
}


@ -1164,8 +1164,10 @@ static u16 __rvu_npc_exact_cmd_rules_cnt_update(struct rvu *rvu, int drop_mcam_i
{
struct npc_exact_table *table;
u16 *cnt, old_cnt;
bool promisc;
table = rvu->hw->table;
promisc = table->promisc_mode[drop_mcam_idx];
cnt = &table->cnt_cmd_rules[drop_mcam_idx];
old_cnt = *cnt;
@ -1177,13 +1179,18 @@ static u16 __rvu_npc_exact_cmd_rules_cnt_update(struct rvu *rvu, int drop_mcam_i
*enable_or_disable_cam = false;
/* If all rules are deleted, disable cam */
if (promisc)
goto done;
/* If all rules are deleted and not already in promisc mode;
* disable cam
*/
if (!*cnt && val < 0) {
*enable_or_disable_cam = true;
goto done;
}
/* If rule got added, enable cam */
/* If rule got added and not already in promisc mode; enable cam */
if (!old_cnt && val > 0) {
*enable_or_disable_cam = true;
goto done;
@ -1462,6 +1469,12 @@ int rvu_npc_exact_promisc_disable(struct rvu *rvu, u16 pcifunc)
*promisc = false;
mutex_unlock(&table->lock);
/* Enable drop rule */
rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX,
true);
dev_dbg(rvu->dev, "%s: disabled promisc mode (cgx=%d lmac=%d)\n",
__func__, cgx_id, lmac_id);
return 0;
}
@ -1503,6 +1516,12 @@ int rvu_npc_exact_promisc_enable(struct rvu *rvu, u16 pcifunc)
*promisc = true;
mutex_unlock(&table->lock);
/* disable drop rule */
rvu_npc_enable_mcam_by_entry_index(rvu, drop_mcam_idx, NIX_INTF_RX,
false);
dev_dbg(rvu->dev, "%s: Enabled promisc mode (cgx=%d lmac=%d)\n",
__func__, cgx_id, lmac_id);
return 0;
}


@ -872,6 +872,14 @@ static int otx2_prepare_flow_request(struct ethtool_rx_flow_spec *fsp,
return -EINVAL;
vlan_etype = be16_to_cpu(fsp->h_ext.vlan_etype);
/* Drop rule with vlan_etype == 802.1Q
* and vlan_id == 0 is not supported
*/
if (vlan_etype == ETH_P_8021Q && !fsp->m_ext.vlan_tci &&
fsp->ring_cookie == RX_CLS_FLOW_DISC)
return -EINVAL;
/* Only ETH_P_8021Q and ETH_P_802AD types supported */
if (vlan_etype != ETH_P_8021Q &&
vlan_etype != ETH_P_8021AD)


@ -597,6 +597,21 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node,
return -EOPNOTSUPP;
}
if (!match.mask->vlan_id) {
struct flow_action_entry *act;
int i;
flow_action_for_each(i, act, &rule->action) {
if (act->id == FLOW_ACTION_DROP) {
netdev_err(nic->netdev,
"vlan tpid 0x%x with vlan_id %d is not supported for DROP rule.\n",
ntohs(match.key->vlan_tpid),
match.key->vlan_id);
return -EOPNOTSUPP;
}
}
}
if (match.mask->vlan_id ||
match.mask->vlan_dei ||
match.mask->vlan_priority) {


@ -594,7 +594,7 @@ int mlx5e_fs_tt_redirect_any_create(struct mlx5e_flow_steering *fs)
err = fs_any_create_table(fs);
if (err)
return err;
goto err_free_any;
err = fs_any_enable(fs);
if (err)
@ -606,8 +606,8 @@ int mlx5e_fs_tt_redirect_any_create(struct mlx5e_flow_steering *fs)
err_destroy_table:
fs_any_destroy_table(fs_any);
kfree(fs_any);
err_free_any:
mlx5e_fs_set_any(fs, NULL);
kfree(fs_any);
return err;
}

View file

@ -729,8 +729,10 @@ int mlx5e_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params,
c = kvzalloc_node(sizeof(*c), GFP_KERNEL, dev_to_node(mlx5_core_dma_dev(mdev)));
cparams = kvzalloc(sizeof(*cparams), GFP_KERNEL);
if (!c || !cparams)
return -ENOMEM;
if (!c || !cparams) {
err = -ENOMEM;
goto err_free;
}
c->priv = priv;
c->mdev = priv->mdev;


@ -1545,7 +1545,8 @@ mlx5_tc_ct_parse_action(struct mlx5_tc_ct_priv *priv,
attr->ct_attr.ct_action |= act->ct.action; /* So we can have clear + ct */
attr->ct_attr.zone = act->ct.zone;
attr->ct_attr.nf_ft = act->ct.flow_table;
if (!(act->ct.action & TCA_CT_ACT_CLEAR))
attr->ct_attr.nf_ft = act->ct.flow_table;
attr->ct_attr.act_miss_cookie = act->miss_cookie;
return 0;
@ -1990,6 +1991,9 @@ mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *priv, struct mlx5_flow_attr *att
if (!priv)
return -EOPNOTSUPP;
if (attr->ct_attr.offloaded)
return 0;
if (attr->ct_attr.ct_action & TCA_CT_ACT_CLEAR) {
err = mlx5_tc_ct_entry_set_registers(priv, &attr->parse_attr->mod_hdr_acts,
0, 0, 0, 0);
@ -1999,11 +2003,15 @@ mlx5_tc_ct_flow_offload(struct mlx5_tc_ct_priv *priv, struct mlx5_flow_attr *att
attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
}
if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */
if (!attr->ct_attr.nf_ft) { /* means only ct clear action, and not ct_clear,ct() */
attr->ct_attr.offloaded = true;
return 0;
}
mutex_lock(&priv->control_lock);
err = __mlx5_tc_ct_flow_offload(priv, attr);
if (!err)
attr->ct_attr.offloaded = true;
mutex_unlock(&priv->control_lock);
return err;
@ -2021,7 +2029,7 @@ void
mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *priv,
struct mlx5_flow_attr *attr)
{
if (!attr->ct_attr.ft) /* no ct action, return */
if (!attr->ct_attr.offloaded) /* no ct action, return */
return;
if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */
return;


@ -29,6 +29,7 @@ struct mlx5_ct_attr {
u32 ct_labels_id;
u32 act_miss_mapping;
u64 act_miss_cookie;
bool offloaded;
struct mlx5_ct_ft *ft;
};


@ -662,8 +662,7 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
/* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
* as we know this is a page_pool page.
*/
page_pool_put_defragged_page(page->pp,
page, -1, true);
page_pool_recycle_direct(page->pp, page);
} while (++n < num);
break;


@ -190,6 +190,7 @@ static int accel_fs_tcp_create_groups(struct mlx5e_flow_table *ft,
in = kvzalloc(inlen, GFP_KERNEL);
if (!in || !ft->g) {
kfree(ft->g);
ft->g = NULL;
kvfree(in);
return -ENOMEM;
}


@ -390,10 +390,18 @@ static void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix)
{
struct mlx5e_wqe_frag_info *wi = get_frag(rq, ix);
if (rq->xsk_pool)
if (rq->xsk_pool) {
mlx5e_xsk_free_rx_wqe(wi);
else
} else {
mlx5e_free_rx_wqe(rq, wi);
/* Avoid a second release of the wqe pages: dealloc is called
* for the same missing wqes on regular RQ flush and on regular
* RQ close. This happens when XSK RQs come into play.
*/
for (int i = 0; i < rq->wqe.info.num_frags; i++, wi++)
wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
}
static void mlx5e_xsk_free_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk)
@ -1743,11 +1751,11 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
prog = rcu_dereference(rq->xdp_prog);
if (prog && mlx5e_xdp_handle(rq, prog, &mxbuf)) {
if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
struct mlx5e_wqe_frag_info *pwi;
for (pwi = head_wi; pwi < wi; pwi++)
pwi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
pwi->frag_page->frags++;
}
return NULL; /* page/packet was consumed by XDP */
}
@ -1817,12 +1825,8 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
rq, wi, cqe, cqe_bcnt);
if (!skb) {
/* probably for XDP */
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
/* do not return page to cache,
* it will be returned on XDP_TX completion.
*/
wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
wi->frag_page->frags++;
goto wq_cyc_pop;
}
@ -1868,12 +1872,8 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
rq, wi, cqe, cqe_bcnt);
if (!skb) {
/* probably for XDP */
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
/* do not return page to cache,
* it will be returned on XDP_TX completion.
*/
wi->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE);
}
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
wi->frag_page->frags++;
goto wq_cyc_pop;
}
@ -2052,12 +2052,12 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
if (prog) {
if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
int i;
struct mlx5e_frag_page *pfp;
for (i = 0; i < sinfo->nr_frags; i++)
/* non-atomic */
__set_bit(page_idx + i, wi->skip_release_bitmap);
return NULL;
for (pfp = head_page; pfp < frag_page; pfp++)
pfp->frags++;
wi->linear_page.frags++;
}
mlx5e_page_release_fragmented(rq, &wi->linear_page);
return NULL; /* page/packet was consumed by XDP */
@ -2155,7 +2155,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
cqe_bcnt, &mxbuf);
if (mlx5e_xdp_handle(rq, prog, &mxbuf)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
__set_bit(page_idx, wi->skip_release_bitmap); /* non-atomic */
frag_page->frags++;
return NULL; /* page/packet was consumed by XDP */
}


@ -1639,7 +1639,8 @@ static void remove_unready_flow(struct mlx5e_tc_flow *flow)
uplink_priv = &rpriv->uplink_priv;
mutex_lock(&uplink_priv->unready_flows_lock);
unready_flow_del(flow);
if (flow_flag_test(flow, NOT_READY))
unready_flow_del(flow);
mutex_unlock(&uplink_priv->unready_flows_lock);
}
@ -1932,8 +1933,7 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
esw_attr = attr->esw_attr;
mlx5e_put_flow_tunnel_id(flow);
if (flow_flag_test(flow, NOT_READY))
remove_unready_flow(flow);
remove_unready_flow(flow);
if (mlx5e_is_offloaded_flow(flow)) {
if (flow_flag_test(flow, SLOW))

@ -807,6 +807,9 @@ static int mlx5_esw_vport_caps_get(struct mlx5_eswitch *esw, struct mlx5_vport *
hca_caps = MLX5_ADDR_OF(query_hca_cap_out, query_ctx, capability);
vport->info.roce_enabled = MLX5_GET(cmd_hca_cap, hca_caps, roce);
if (!MLX5_CAP_GEN_MAX(esw->dev, hca_cap_2))
goto out_free;
memset(query_ctx, 0, query_out_sz);
err = mlx5_vport_get_other_func_cap(esw->dev, vport->vport, query_ctx,
MLX5_CAP_GENERAL_2);

@ -68,14 +68,19 @@ static struct thermal_zone_device_ops mlx5_thermal_ops = {
int mlx5_thermal_init(struct mlx5_core_dev *mdev)
{
char data[THERMAL_NAME_LENGTH];
struct mlx5_thermal *thermal;
struct thermal_zone_device *tzd;
const char *data = "mlx5";
int err;
tzd = thermal_zone_get_zone_by_name(data);
if (!IS_ERR(tzd))
if (!mlx5_core_is_pf(mdev) && !mlx5_core_is_ecpf(mdev))
return 0;
err = snprintf(data, sizeof(data), "mlx5_%s", dev_name(mdev->device));
if (err < 0 || err >= sizeof(data)) {
mlx5_core_err(mdev, "Failed to setup thermal zone name, %d\n", err);
return -EINVAL;
}
thermal = kzalloc(sizeof(*thermal), GFP_KERNEL);
if (!thermal)
return -ENOMEM;
@ -89,10 +94,10 @@ int mlx5_thermal_init(struct mlx5_core_dev *mdev)
&mlx5_thermal_ops,
NULL, 0, MLX5_THERMAL_POLL_INT_MSEC);
if (IS_ERR(thermal->tzdev)) {
dev_err(mdev->device, "Failed to register thermal zone device (%s) %ld\n",
data, PTR_ERR(thermal->tzdev));
err = PTR_ERR(thermal->tzdev);
mlx5_core_err(mdev, "Failed to register thermal zone device (%s) %d\n", data, err);
kfree(thermal);
return -EINVAL;
return err;
}
mdev->thermal = thermal;
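
The thermal-zone rename above hinges on checking snprintf()'s return value: it
is negative on error and, on success, reports the length the output would have
had, so a result that is >= the buffer size means the name was truncated. A
standalone illustration of that check (the buffer size and device names here
are made up):

    #include <stdio.h>

    #define NAME_LEN 20  /* stand-in for THERMAL_NAME_LENGTH */

    static int build_zone_name(char *buf, size_t len, const char *dev)
    {
        int n = snprintf(buf, len, "mlx5_%s", dev);

        /* n < 0: encoding error; n >= len: output did not fit */
        if (n < 0 || (size_t)n >= len)
            return -1;
        return 0;
    }

    int main(void)
    {
        char name[NAME_LEN];

        printf("short name: %d\n", build_zone_name(name, sizeof(name), "0000:03:00.0"));
        printf("long name:  %d\n", build_zone_name(name, sizeof(name), "a-very-long-device-name"));
        return 0;
    }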

@ -46,7 +46,7 @@ config LAN743X
tristate "LAN743x support"
depends on PCI
depends on PTP_1588_CLOCK_OPTIONAL
select PHYLIB
select FIXED_PHY
select CRC16
select CRC32
help

@ -2927,7 +2927,6 @@ int ocelot_init(struct ocelot *ocelot)
mutex_init(&ocelot->mact_lock);
mutex_init(&ocelot->fwd_domain_lock);
mutex_init(&ocelot->tas_lock);
spin_lock_init(&ocelot->ptp_clock_lock);
spin_lock_init(&ocelot->ts_id_lock);

@ -67,10 +67,13 @@ void ocelot_port_update_active_preemptible_tcs(struct ocelot *ocelot, int port)
val = mm->preemptible_tcs;
/* Cut through switching doesn't work for preemptible priorities,
* so first make sure it is disabled.
* so first make sure it is disabled. Also, changing the preemptible
* TCs affects the oversized frame dropping logic, so that needs to be
* re-triggered. And since tas_guard_bands_update() also implicitly
* calls cut_through_fwd(), we don't need to explicitly call it.
*/
mm->active_preemptible_tcs = val;
ocelot->ops->cut_through_fwd(ocelot);
ocelot->ops->tas_guard_bands_update(ocelot, port);
dev_dbg(ocelot->dev,
"port %d %s/%s, MM TX %s, preemptible TCs 0x%x, active 0x%x\n",
@ -89,17 +92,14 @@ void ocelot_port_change_fp(struct ocelot *ocelot, int port,
{
struct ocelot_mm_state *mm = &ocelot->mm[port];
mutex_lock(&ocelot->fwd_domain_lock);
lockdep_assert_held(&ocelot->fwd_domain_lock);
if (mm->preemptible_tcs == preemptible_tcs)
goto out_unlock;
return;
mm->preemptible_tcs = preemptible_tcs;
ocelot_port_update_active_preemptible_tcs(ocelot, port);
out_unlock:
mutex_unlock(&ocelot->fwd_domain_lock);
}
static void ocelot_mm_update_port_status(struct ocelot *ocelot, int port)

@ -353,12 +353,6 @@ err_out_reset:
ionic_reset(ionic);
err_out_teardown:
ionic_dev_teardown(ionic);
pci_clear_master(pdev);
/* Don't fail the probe for these errors, keep
* the hw interface around for inspection
*/
return 0;
err_out_unmap_bars:
ionic_unmap_bars(ionic);
err_out_pci_release_regions:

@ -475,11 +475,6 @@ static void ionic_qcqs_free(struct ionic_lif *lif)
static void ionic_link_qcq_interrupts(struct ionic_qcq *src_qcq,
struct ionic_qcq *n_qcq)
{
if (WARN_ON(n_qcq->flags & IONIC_QCQ_F_INTR)) {
ionic_intr_free(n_qcq->cq.lif->ionic, n_qcq->intr.index);
n_qcq->flags &= ~IONIC_QCQ_F_INTR;
}
n_qcq->intr.vector = src_qcq->intr.vector;
n_qcq->intr.index = src_qcq->intr.index;
n_qcq->napi_qcq = src_qcq->napi_qcq;

@ -186,9 +186,6 @@ static int txgbe_calc_eeprom_checksum(struct wx *wx, u16 *checksum)
if (eeprom_ptrs)
kvfree(eeprom_ptrs);
if (*checksum > TXGBE_EEPROM_SUM)
return -EINVAL;
*checksum = TXGBE_EEPROM_SUM - *checksum;
return 0;

@ -184,13 +184,10 @@ static ssize_t nsim_dev_trap_fa_cookie_write(struct file *file,
cookie_len = (count - 1) / 2;
if ((count - 1) % 2)
return -EINVAL;
buf = kmalloc(count, GFP_KERNEL | __GFP_NOWARN);
if (!buf)
return -ENOMEM;
ret = simple_write_to_buffer(buf, count, ppos, data, count);
if (ret < 0)
goto free_buf;
buf = memdup_user(data, count);
if (IS_ERR(buf))
return PTR_ERR(buf);
fa_cookie = kmalloc(sizeof(*fa_cookie) + cookie_len,
GFP_KERNEL | __GFP_NOWARN);
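
The netdevsim hunk above drops the kmalloc() + simple_write_to_buffer() pair in
favour of memdup_user(), which allocates a kernel buffer and copies the user
data in a single call, returning an ERR_PTR on failure. A kernel-style sketch
of the pattern (the surrounding function is simplified and hypothetical, not
the actual driver code):

    /* Sketch only: copy 'count' bytes from user space in one step. */
    static ssize_t example_write(struct file *file, const char __user *data,
                                 size_t count, loff_t *ppos)
    {
        char *buf;

        buf = memdup_user(data, count);
        if (IS_ERR(buf))
            return PTR_ERR(buf);    /* -EFAULT or -ENOMEM */

        /* ... parse buf[0..count - 1] ... */

        kfree(buf);
        return count;
    }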

@ -6157,8 +6157,11 @@ static int airo_get_rate(struct net_device *dev,
struct iw_param *vwrq = &wrqu->bitrate;
struct airo_info *local = dev->ml_priv;
StatusRid status_rid; /* Card status info */
int ret;
readStatusRid(local, &status_rid, 1);
ret = readStatusRid(local, &status_rid, 1);
if (ret)
return -EBUSY;
vwrq->value = le16_to_cpu(status_rid.currentXmitRate) * 500000;
/* If more than one rate, set auto */

@ -84,7 +84,6 @@ const struct iwl_ht_params iwl_22000_ht_params = {
.mac_addr_from_csr = 0x380, \
.ht_params = &iwl_22000_ht_params, \
.nvm_ver = IWL_22000_NVM_VERSION, \
.trans.use_tfh = true, \
.trans.rf_id = true, \
.trans.gen2 = true, \
.nvm_type = IWL_NVM_EXT, \
@ -122,7 +121,6 @@ const struct iwl_ht_params iwl_22000_ht_params = {
const struct iwl_cfg_trans_params iwl_qu_trans_cfg = {
.mq_rx_supported = true,
.use_tfh = true,
.rf_id = true,
.gen2 = true,
.device_family = IWL_DEVICE_FAMILY_22000,
@ -134,7 +132,6 @@ const struct iwl_cfg_trans_params iwl_qu_trans_cfg = {
const struct iwl_cfg_trans_params iwl_qu_medium_latency_trans_cfg = {
.mq_rx_supported = true,
.use_tfh = true,
.rf_id = true,
.gen2 = true,
.device_family = IWL_DEVICE_FAMILY_22000,
@ -146,7 +143,6 @@ const struct iwl_cfg_trans_params iwl_qu_medium_latency_trans_cfg = {
const struct iwl_cfg_trans_params iwl_qu_long_latency_trans_cfg = {
.mq_rx_supported = true,
.use_tfh = true,
.rf_id = true,
.gen2 = true,
.device_family = IWL_DEVICE_FAMILY_22000,
@ -200,7 +196,6 @@ const struct iwl_cfg_trans_params iwl_ax200_trans_cfg = {
.device_family = IWL_DEVICE_FAMILY_22000,
.base_params = &iwl_22000_base_params,
.mq_rx_supported = true,
.use_tfh = true,
.rf_id = true,
.gen2 = true,
.bisr_workaround = 1,

@ -256,7 +256,6 @@ enum iwl_cfg_trans_ltr_delay {
* @xtal_latency: power up latency to get the xtal stabilized
* @extra_phy_cfg_flags: extra configuration flags to pass to the PHY
* @rf_id: need to read rf_id to determine the firmware image
* @use_tfh: use TFH
* @gen2: 22000 and on transport operation
* @mq_rx_supported: multi-queue rx support
* @integrated: discrete or integrated
@ -271,7 +270,6 @@ struct iwl_cfg_trans_params {
u32 xtal_latency;
u32 extra_phy_cfg_flags;
u32 rf_id:1,
use_tfh:1,
gen2:1,
mq_rx_supported:1,
integrated:1,

@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
* Copyright (C) 2005-2014, 2018-2021 Intel Corporation
* Copyright (C) 2005-2014, 2018-2021, 2023 Intel Corporation
* Copyright (C) 2015-2017 Intel Deutschland GmbH
*/
#ifndef __iwl_fh_h__
@ -71,7 +71,7 @@
static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans,
unsigned int chnl)
{
if (trans->trans_cfg->use_tfh) {
if (trans->trans_cfg->gen2) {
WARN_ON_ONCE(chnl >= 64);
return TFH_TFDQ_CBB_TABLE + 8 * chnl;
}

@ -2,7 +2,7 @@
/*
* Copyright (C) 2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
* Copyright (C) 2019-2021 Intel Corporation
* Copyright (C) 2019-2021, 2023 Intel Corporation
*/
#include <linux/kernel.h>
#include <linux/bsearch.h>
@ -42,7 +42,7 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty);
if (trans->trans_cfg->use_tfh) {
if (trans->trans_cfg->gen2) {
trans->txqs.tfd.addr_size = 64;
trans->txqs.tfd.max_tbs = IWL_TFH_NUM_TBS;
trans->txqs.tfd.size = sizeof(struct iwl_tfh_tfd);
@ -101,7 +101,7 @@ int iwl_trans_init(struct iwl_trans *trans)
/* Some things must not change even if the config does */
WARN_ON(trans->txqs.tfd.addr_size !=
(trans->trans_cfg->use_tfh ? 64 : 36));
(trans->trans_cfg->gen2 ? 64 : 36));
snprintf(trans->dev_cmd_pool_name, sizeof(trans->dev_cmd_pool_name),
"iwl_cmd_pool:%s", dev_name(trans->dev));

@ -1450,7 +1450,7 @@ static inline bool iwl_mvm_has_new_station_api(const struct iwl_fw *fw)
static inline bool iwl_mvm_has_new_tx_api(struct iwl_mvm *mvm)
{
/* TODO - replace with TLV once defined */
return mvm->trans->trans_cfg->use_tfh;
return mvm->trans->trans_cfg->gen2;
}
static inline bool iwl_mvm_has_unified_ucode(struct iwl_mvm *mvm)

@ -819,7 +819,7 @@ static int iwl_pcie_load_cpu_sections_8000(struct iwl_trans *trans,
iwl_enable_interrupts(trans);
if (trans->trans_cfg->use_tfh) {
if (trans->trans_cfg->gen2) {
if (cpu == 1)
iwl_write_prph(trans, UREG_UCODE_LOAD_STATUS,
0xFFFF);
@ -3394,7 +3394,7 @@ iwl_trans_pcie_dump_data(struct iwl_trans *trans,
u8 tfdidx;
u32 caplen, cmdlen;
if (trans->trans_cfg->use_tfh)
if (trans->trans_cfg->gen2)
tfdidx = idx;
else
tfdidx = ptr;

@ -364,7 +364,7 @@ void iwl_trans_pcie_tx_reset(struct iwl_trans *trans)
for (txq_id = 0; txq_id < trans->trans_cfg->base_params->num_of_queues;
txq_id++) {
struct iwl_txq *txq = trans->txqs.txq[txq_id];
if (trans->trans_cfg->use_tfh)
if (trans->trans_cfg->gen2)
iwl_write_direct64(trans,
FH_MEM_CBBC_QUEUE(trans, txq_id),
txq->dma_addr);

@ -985,7 +985,7 @@ void iwl_txq_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq)
bool active;
u8 fifo;
if (trans->trans_cfg->use_tfh) {
if (trans->trans_cfg->gen2) {
IWL_ERR(trans, "Queue %d is stuck %d %d\n", txq_id,
txq->read_ptr, txq->write_ptr);
/* TODO: access new SCD registers and dump them */
@ -1040,7 +1040,7 @@ int iwl_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
if (WARN_ON(txq->entries || txq->tfds))
return -EINVAL;
if (trans->trans_cfg->use_tfh)
if (trans->trans_cfg->gen2)
tfd_sz = trans->txqs.tfd.size * slots_num;
timer_setup(&txq->stuck_timer, iwl_txq_stuck_timer, 0);
@ -1347,7 +1347,7 @@ static inline dma_addr_t iwl_txq_gen1_tfd_tb_get_addr(struct iwl_trans *trans,
dma_addr_t addr;
dma_addr_t hi_len;
if (trans->trans_cfg->use_tfh) {
if (trans->trans_cfg->gen2) {
struct iwl_tfh_tfd *tfh_tfd = _tfd;
struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx];
@ -1408,7 +1408,7 @@ void iwl_txq_gen1_tfd_unmap(struct iwl_trans *trans,
meta->tbs = 0;
if (trans->trans_cfg->use_tfh) {
if (trans->trans_cfg->gen2) {
struct iwl_tfh_tfd *tfd_fh = (void *)tfd;
tfd_fh->num_tbs = 0;
@ -1625,7 +1625,7 @@ void iwl_txq_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
txq->entries[read_ptr].skb = NULL;
if (!trans->trans_cfg->use_tfh)
if (!trans->trans_cfg->gen2)
iwl_txq_gen1_inval_byte_cnt_tbl(trans, txq);
iwl_txq_free_tfd(trans, txq);

@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
* Copyright (C) 2020-2022 Intel Corporation
* Copyright (C) 2020-2023 Intel Corporation
*/
#ifndef __iwl_trans_queue_tx_h__
#define __iwl_trans_queue_tx_h__
@ -38,7 +38,7 @@ static inline void iwl_wake_queue(struct iwl_trans *trans,
static inline void *iwl_txq_get_tfd(struct iwl_trans *trans,
struct iwl_txq *txq, int idx)
{
if (trans->trans_cfg->use_tfh)
if (trans->trans_cfg->gen2)
idx = iwl_txq_get_cmd_index(txq, idx);
return (u8 *)txq->tfds + trans->txqs.tfd.size * idx;
@ -135,7 +135,7 @@ static inline u8 iwl_txq_gen1_tfd_get_num_tbs(struct iwl_trans *trans,
{
struct iwl_tfd *tfd;
if (trans->trans_cfg->use_tfh) {
if (trans->trans_cfg->gen2) {
struct iwl_tfh_tfd *tfh_tfd = _tfd;
return le16_to_cpu(tfh_tfd->num_tbs) & 0x1f;
@ -151,7 +151,7 @@ static inline u16 iwl_txq_gen1_tfd_tb_get_len(struct iwl_trans *trans,
struct iwl_tfd *tfd;
struct iwl_tfd_tb *tb;
if (trans->trans_cfg->use_tfh) {
if (trans->trans_cfg->gen2) {
struct iwl_tfh_tfd *tfh_tfd = _tfd;
struct iwl_tfh_tb *tfh_tb = &tfh_tfd->tbs[idx];

@ -231,10 +231,6 @@ int mt7921_dma_init(struct mt7921_dev *dev)
if (ret)
return ret;
ret = mt7921_wfsys_reset(dev);
if (ret)
return ret;
/* init tx queue */
ret = mt76_connac_init_tx_queues(dev->phy.mt76, MT7921_TXQ_BAND0,
MT7921_TX_RING_SIZE,

@ -476,12 +476,6 @@ static int mt7921_load_firmware(struct mt7921_dev *dev)
{
int ret;
ret = mt76_get_field(dev, MT_CONN_ON_MISC, MT_TOP_MISC2_FW_N9_RDY);
if (ret && mt76_is_mmio(&dev->mt76)) {
dev_dbg(dev->mt76.dev, "Firmware is already download\n");
goto fw_loaded;
}
ret = mt76_connac2_load_patch(&dev->mt76, mt7921_patch_name(dev));
if (ret)
return ret;
@ -504,8 +498,6 @@ static int mt7921_load_firmware(struct mt7921_dev *dev)
return -EIO;
}
fw_loaded:
#ifdef CONFIG_PM
dev->mt76.hw->wiphy->wowlan = &mt76_connac_wowlan_support;
#endif /* CONFIG_PM */

@ -325,6 +325,10 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
bus_ops->rmw = mt7921_rmw;
dev->mt76.bus = bus_ops;
ret = mt7921e_mcu_fw_pmctrl(dev);
if (ret)
goto err_free_dev;
ret = __mt7921e_mcu_drv_pmctrl(dev);
if (ret)
goto err_free_dev;
@ -333,6 +337,10 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
(mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
ret = mt7921_wfsys_reset(dev);
if (ret)
goto err_free_dev;
mt76_wr(dev, MT_WFDMA0_HOST_INT_ENA, 0);
mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);

@ -3026,17 +3026,18 @@ static ssize_t rtw89_debug_priv_send_h2c_set(struct file *filp,
struct rtw89_debugfs_priv *debugfs_priv = filp->private_data;
struct rtw89_dev *rtwdev = debugfs_priv->rtwdev;
u8 *h2c;
int ret;
u16 h2c_len = count / 2;
h2c = rtw89_hex2bin_user(rtwdev, user_buf, count);
if (IS_ERR(h2c))
return -EFAULT;
rtw89_fw_h2c_raw(rtwdev, h2c, h2c_len);
ret = rtw89_fw_h2c_raw(rtwdev, h2c, h2c_len);
kfree(h2c);
return count;
return ret ? ret : count;
}
static int

@ -36,7 +36,7 @@ static const struct smcd_ops ism_ops;
static struct ism_client *clients[MAX_CLIENTS]; /* use an array rather than */
/* a list for fast mapping */
static u8 max_client;
static DEFINE_SPINLOCK(clients_lock);
static DEFINE_MUTEX(clients_lock);
struct ism_dev_list {
struct list_head list;
struct mutex mutex; /* protects ism device list */
@ -47,14 +47,22 @@ static struct ism_dev_list ism_dev_list = {
.mutex = __MUTEX_INITIALIZER(ism_dev_list.mutex),
};
static void ism_setup_forwarding(struct ism_client *client, struct ism_dev *ism)
{
unsigned long flags;
spin_lock_irqsave(&ism->lock, flags);
ism->subs[client->id] = client;
spin_unlock_irqrestore(&ism->lock, flags);
}
int ism_register_client(struct ism_client *client)
{
struct ism_dev *ism;
unsigned long flags;
int i, rc = -ENOSPC;
mutex_lock(&ism_dev_list.mutex);
spin_lock_irqsave(&clients_lock, flags);
mutex_lock(&clients_lock);
for (i = 0; i < MAX_CLIENTS; ++i) {
if (!clients[i]) {
clients[i] = client;
@ -65,12 +73,14 @@ int ism_register_client(struct ism_client *client)
break;
}
}
spin_unlock_irqrestore(&clients_lock, flags);
mutex_unlock(&clients_lock);
if (i < MAX_CLIENTS) {
/* initialize with all devices that we got so far */
list_for_each_entry(ism, &ism_dev_list.list, list) {
ism->priv[i] = NULL;
client->add(ism);
ism_setup_forwarding(client, ism);
}
}
mutex_unlock(&ism_dev_list.mutex);
@ -86,25 +96,32 @@ int ism_unregister_client(struct ism_client *client)
int rc = 0;
mutex_lock(&ism_dev_list.mutex);
spin_lock_irqsave(&clients_lock, flags);
list_for_each_entry(ism, &ism_dev_list.list, list) {
spin_lock_irqsave(&ism->lock, flags);
/* Stop forwarding IRQs and events */
ism->subs[client->id] = NULL;
for (int i = 0; i < ISM_NR_DMBS; ++i) {
if (ism->sba_client_arr[i] == client->id) {
WARN(1, "%s: attempt to unregister '%s' with registered dmb(s)\n",
__func__, client->name);
rc = -EBUSY;
goto err_reg_dmb;
}
}
spin_unlock_irqrestore(&ism->lock, flags);
}
mutex_unlock(&ism_dev_list.mutex);
mutex_lock(&clients_lock);
clients[client->id] = NULL;
if (client->id + 1 == max_client)
max_client--;
spin_unlock_irqrestore(&clients_lock, flags);
list_for_each_entry(ism, &ism_dev_list.list, list) {
for (int i = 0; i < ISM_NR_DMBS; ++i) {
if (ism->sba_client_arr[i] == client->id) {
pr_err("%s: attempt to unregister client '%s'"
"with registered dmb(s)\n", __func__,
client->name);
rc = -EBUSY;
goto out;
}
}
}
out:
mutex_unlock(&ism_dev_list.mutex);
mutex_unlock(&clients_lock);
return rc;
err_reg_dmb:
spin_unlock_irqrestore(&ism->lock, flags);
mutex_unlock(&ism_dev_list.mutex);
return rc;
}
EXPORT_SYMBOL_GPL(ism_unregister_client);
@ -328,6 +345,7 @@ int ism_register_dmb(struct ism_dev *ism, struct ism_dmb *dmb,
struct ism_client *client)
{
union ism_reg_dmb cmd;
unsigned long flags;
int ret;
ret = ism_alloc_dmb(ism, dmb);
@ -351,7 +369,9 @@ int ism_register_dmb(struct ism_dev *ism, struct ism_dmb *dmb,
goto out;
}
dmb->dmb_tok = cmd.response.dmb_tok;
spin_lock_irqsave(&ism->lock, flags);
ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = client->id;
spin_unlock_irqrestore(&ism->lock, flags);
out:
return ret;
}
@ -360,6 +380,7 @@ EXPORT_SYMBOL_GPL(ism_register_dmb);
int ism_unregister_dmb(struct ism_dev *ism, struct ism_dmb *dmb)
{
union ism_unreg_dmb cmd;
unsigned long flags;
int ret;
memset(&cmd, 0, sizeof(cmd));
@ -368,7 +389,9 @@ int ism_unregister_dmb(struct ism_dev *ism, struct ism_dmb *dmb)
cmd.request.dmb_tok = dmb->dmb_tok;
spin_lock_irqsave(&ism->lock, flags);
ism->sba_client_arr[dmb->sba_idx - ISM_DMB_BIT_OFFSET] = NO_CLIENT;
spin_unlock_irqrestore(&ism->lock, flags);
ret = ism_cmd(ism, &cmd);
if (ret && ret != ISM_ERROR)
@ -491,6 +514,7 @@ static u16 ism_get_chid(struct ism_dev *ism)
static void ism_handle_event(struct ism_dev *ism)
{
struct ism_event *entry;
struct ism_client *clt;
int i;
while ((ism->ieq_idx + 1) != READ_ONCE(ism->ieq->header.idx)) {
@ -499,21 +523,21 @@ static void ism_handle_event(struct ism_dev *ism)
entry = &ism->ieq->entry[ism->ieq_idx];
debug_event(ism_debug_info, 2, entry, sizeof(*entry));
spin_lock(&clients_lock);
for (i = 0; i < max_client; ++i)
if (clients[i])
clients[i]->handle_event(ism, entry);
spin_unlock(&clients_lock);
for (i = 0; i < max_client; ++i) {
clt = ism->subs[i];
if (clt)
clt->handle_event(ism, entry);
}
}
}
static irqreturn_t ism_handle_irq(int irq, void *data)
{
struct ism_dev *ism = data;
struct ism_client *clt;
unsigned long bit, end;
unsigned long *bv;
u16 dmbemask;
u8 client_id;
bv = (void *) &ism->sba->dmb_bits[ISM_DMB_WORD_OFFSET];
end = sizeof(ism->sba->dmb_bits) * BITS_PER_BYTE - ISM_DMB_BIT_OFFSET;
@ -530,8 +554,10 @@ static irqreturn_t ism_handle_irq(int irq, void *data)
dmbemask = ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET];
ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0;
barrier();
clt = clients[ism->sba_client_arr[bit]];
clt->handle_irq(ism, bit + ISM_DMB_BIT_OFFSET, dmbemask);
client_id = ism->sba_client_arr[bit];
if (unlikely(client_id == NO_CLIENT || !ism->subs[client_id]))
continue;
ism->subs[client_id]->handle_irq(ism, bit + ISM_DMB_BIT_OFFSET, dmbemask);
}
if (ism->sba->e) {
@ -548,20 +574,9 @@ static u64 ism_get_local_gid(struct ism_dev *ism)
return ism->local_gid;
}
static void ism_dev_add_work_func(struct work_struct *work)
{
struct ism_client *client = container_of(work, struct ism_client,
add_work);
client->add(client->tgt_ism);
atomic_dec(&client->tgt_ism->add_dev_cnt);
wake_up(&client->tgt_ism->waitq);
}
static int ism_dev_init(struct ism_dev *ism)
{
struct pci_dev *pdev = ism->pdev;
unsigned long flags;
int i, ret;
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
@ -594,25 +609,16 @@ static int ism_dev_init(struct ism_dev *ism)
/* hardware is V2 capable */
ism_create_system_eid();
init_waitqueue_head(&ism->waitq);
atomic_set(&ism->free_clients_cnt, 0);
atomic_set(&ism->add_dev_cnt, 0);
wait_event(ism->waitq, !atomic_read(&ism->add_dev_cnt));
spin_lock_irqsave(&clients_lock, flags);
for (i = 0; i < max_client; ++i)
if (clients[i]) {
INIT_WORK(&clients[i]->add_work,
ism_dev_add_work_func);
clients[i]->tgt_ism = ism;
atomic_inc(&ism->add_dev_cnt);
schedule_work(&clients[i]->add_work);
}
spin_unlock_irqrestore(&clients_lock, flags);
wait_event(ism->waitq, !atomic_read(&ism->add_dev_cnt));
mutex_lock(&ism_dev_list.mutex);
mutex_lock(&clients_lock);
for (i = 0; i < max_client; ++i) {
if (clients[i]) {
clients[i]->add(ism);
ism_setup_forwarding(clients[i], ism);
}
}
mutex_unlock(&clients_lock);
list_add(&ism->list, &ism_dev_list.list);
mutex_unlock(&ism_dev_list.mutex);
@ -687,36 +693,24 @@ err_dev:
return ret;
}
static void ism_dev_remove_work_func(struct work_struct *work)
{
struct ism_client *client = container_of(work, struct ism_client,
remove_work);
client->remove(client->tgt_ism);
atomic_dec(&client->tgt_ism->free_clients_cnt);
wake_up(&client->tgt_ism->waitq);
}
/* Callers must hold ism_dev_list.mutex */
static void ism_dev_exit(struct ism_dev *ism)
{
struct pci_dev *pdev = ism->pdev;
unsigned long flags;
int i;
wait_event(ism->waitq, !atomic_read(&ism->free_clients_cnt));
spin_lock_irqsave(&clients_lock, flags);
spin_lock_irqsave(&ism->lock, flags);
for (i = 0; i < max_client; ++i)
if (clients[i]) {
INIT_WORK(&clients[i]->remove_work,
ism_dev_remove_work_func);
clients[i]->tgt_ism = ism;
atomic_inc(&ism->free_clients_cnt);
schedule_work(&clients[i]->remove_work);
}
spin_unlock_irqrestore(&clients_lock, flags);
ism->subs[i] = NULL;
spin_unlock_irqrestore(&ism->lock, flags);
wait_event(ism->waitq, !atomic_read(&ism->free_clients_cnt));
mutex_lock(&ism_dev_list.mutex);
mutex_lock(&clients_lock);
for (i = 0; i < max_client; ++i) {
if (clients[i])
clients[i]->remove(ism);
}
mutex_unlock(&clients_lock);
if (SYSTEM_EID.serial_number[0] != '0' ||
SYSTEM_EID.type[0] != '0')
@ -727,15 +721,14 @@ static void ism_dev_exit(struct ism_dev *ism)
kfree(ism->sba_client_arr);
pci_free_irq_vectors(pdev);
list_del_init(&ism->list);
mutex_unlock(&ism_dev_list.mutex);
}
static void ism_remove(struct pci_dev *pdev)
{
struct ism_dev *ism = dev_get_drvdata(&pdev->dev);
mutex_lock(&ism_dev_list.mutex);
ism_dev_exit(ism);
mutex_unlock(&ism_dev_list.mutex);
pci_release_mem_regions(pdev);
pci_disable_device(pdev);

@ -44,9 +44,7 @@ struct ism_dev {
u64 local_gid;
int ieq_idx;
atomic_t free_clients_cnt;
atomic_t add_dev_cnt;
wait_queue_head_t waitq;
struct ism_client *subs[MAX_CLIENTS];
};
struct ism_event {
@ -68,9 +66,6 @@ struct ism_client {
*/
void (*handle_irq)(struct ism_dev *dev, unsigned int bit, u16 dmbemask);
/* Private area - don't touch! */
struct work_struct remove_work;
struct work_struct add_work;
struct ism_dev *tgt_ism;
u8 id;
};

@ -67,6 +67,9 @@ struct nf_conntrack_tuple {
/* The protocol. */
u_int8_t protonum;
/* The direction must be ignored for the tuplehash */
struct { } __nfct_hash_offsetend;
/* The direction (for tuplehash) */
u_int8_t dir;
} dst;

@ -1211,6 +1211,29 @@ int __nft_release_basechain(struct nft_ctx *ctx);
unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv);
static inline bool nft_use_inc(u32 *use)
{
if (*use == UINT_MAX)
return false;
(*use)++;
return true;
}
static inline void nft_use_dec(u32 *use)
{
WARN_ON_ONCE((*use)-- == 0);
}
/* For error and abort path: restore use counter to previous state. */
static inline void nft_use_inc_restore(u32 *use)
{
WARN_ON_ONCE(!nft_use_inc(use));
}
#define nft_use_dec_restore nft_use_dec
/**
* struct nft_table - nf_tables table
*
@ -1296,8 +1319,8 @@ struct nft_object {
struct list_head list;
struct rhlist_head rhlhead;
struct nft_object_hash_key key;
u32 genmask:2,
use:30;
u32 genmask:2;
u32 use;
u64 handle;
u16 udlen;
u8 *udata;
@ -1399,8 +1422,8 @@ struct nft_flowtable {
char *name;
int hooknum;
int ops_len;
u32 genmask:2,
use:30;
u32 genmask:2;
u32 use;
u64 handle;
/* runtime data below here */
struct list_head hook_list ____cacheline_aligned;
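
The nft_use_inc()/nft_use_dec() helpers introduced above replace the old
open-coded "use++"/"use--" on a 30-bit bitfield: the counter becomes a full
u32, an increment is refused once it reaches UINT_MAX (surfaced to user space
as EMFILE in the later hunks), and a decrement below zero warns instead of
wrapping. A standalone sketch of the same saturating pattern in plain C (not
the kernel code itself):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool use_inc(unsigned int *use)
    {
        if (*use == UINT_MAX)
            return false;   /* would overflow: refuse the new reference */
        (*use)++;
        return true;
    }

    static void use_dec(unsigned int *use)
    {
        if (*use == 0) {
            fprintf(stderr, "refcount underflow\n");
            return;
        }
        (*use)--;
    }

    int main(void)
    {
        unsigned int use = UINT_MAX - 1;

        printf("inc: %d, use=%u\n", use_inc(&use), use);    /* taken */
        printf("inc: %d, use=%u\n", use_inc(&use), use);    /* refused */
        use_dec(&use);
        printf("use=%u\n", use);
        return 0;
    }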

@ -134,7 +134,7 @@ extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
*/
static inline unsigned int psched_mtu(const struct net_device *dev)
{
return dev->mtu + dev->hard_header_len;
return READ_ONCE(dev->mtu) + dev->hard_header_len;
}
static inline struct net *qdisc_net(struct Qdisc *q)
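
Wrapping dev->mtu in READ_ONCE() above marks the lockless read as racing with
concurrent MTU updates, so the compiler can neither tear the load nor re-read
the field mid-calculation. A much-simplified userspace approximation of what
the annotation does (the real kernel macros handle more cases; this is only a
sketch):

    #include <stdio.h>

    /* Simplified: force exactly one volatile access, roughly what the
     * kernel macros do for scalar types. Not the real definitions. */
    #define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
    #define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

    static unsigned int dev_mtu = 1500;

    static unsigned int psched_mtu_sketch(unsigned int hard_header_len)
    {
        /* a single read of dev_mtu, even if another thread updates it */
        return READ_ONCE(dev_mtu) + hard_header_len;
    }

    int main(void)
    {
        WRITE_ONCE(dev_mtu, 9000);
        printf("%u\n", psched_mtu_sketch(14));
        return 0;
    }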

@ -663,6 +663,7 @@ struct ocelot_ops {
struct flow_stats *stats);
void (*cut_through_fwd)(struct ocelot *ocelot);
void (*tas_clock_adjust)(struct ocelot *ocelot);
void (*tas_guard_bands_update)(struct ocelot *ocelot, int port);
void (*update_stats)(struct ocelot *ocelot);
};
@ -863,12 +864,12 @@ struct ocelot {
struct mutex stat_view_lock;
/* Lock for serializing access to the MAC table */
struct mutex mact_lock;
/* Lock for serializing forwarding domain changes */
/* Lock for serializing forwarding domain changes, including the
* configuration of the Time-Aware Shaper, MAC Merge layer and
* cut-through forwarding, on which it depends
*/
struct mutex fwd_domain_lock;
/* Lock for serializing Time-Aware Shaper changes */
struct mutex tas_lock;
struct workqueue_struct *owq;
u8 ptp:1;

@ -122,22 +122,6 @@ static void get_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
atomic_inc(&rcpu->refcnt);
}
/* called from workqueue, to workaround syscall using preempt_disable */
static void cpu_map_kthread_stop(struct work_struct *work)
{
struct bpf_cpu_map_entry *rcpu;
rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq);
/* Wait for flush in __cpu_map_entry_free(), via full RCU barrier,
* as it waits until all in-flight call_rcu() callbacks complete.
*/
rcu_barrier();
/* kthread_stop will wake_up_process and wait for it to complete */
kthread_stop(rcpu->kthread);
}
static void __cpu_map_ring_cleanup(struct ptr_ring *ring)
{
/* The tear-down procedure should have made sure that queue is
@ -165,6 +149,30 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
}
}
/* called from workqueue, to workaround syscall using preempt_disable */
static void cpu_map_kthread_stop(struct work_struct *work)
{
struct bpf_cpu_map_entry *rcpu;
int err;
rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq);
/* Wait for flush in __cpu_map_entry_free(), via full RCU barrier,
* as it waits until all in-flight call_rcu() callbacks complete.
*/
rcu_barrier();
/* kthread_stop will wake_up_process and wait for it to complete */
err = kthread_stop(rcpu->kthread);
if (err) {
/* kthread_stop may be called before cpu_map_kthread_run
* is executed, so we need to release the memory related
* to rcpu.
*/
put_cpu_map_entry(rcpu);
}
}
static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu,
struct list_head *listp,
struct xdp_cpumap_stats *stats)

@ -5642,8 +5642,9 @@ continue_func:
verbose(env, "verifier bug. subprog has tail_call and async cb\n");
return -EFAULT;
}
/* async callbacks don't increase bpf prog stack size */
continue;
/* async callbacks don't increase bpf prog stack size unless called directly */
if (!bpf_pseudo_call(insn + i))
continue;
}
i = next_insn;

@ -63,4 +63,6 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(napi_poll);
EXPORT_TRACEPOINT_SYMBOL_GPL(tcp_send_reset);
EXPORT_TRACEPOINT_SYMBOL_GPL(tcp_bad_csum);
EXPORT_TRACEPOINT_SYMBOL_GPL(udp_fail_queue_rcv_skb);
EXPORT_TRACEPOINT_SYMBOL_GPL(sk_data_ready);

@ -4261,6 +4261,11 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
skb_push(skb, -skb_network_offset(skb) + offset);
/* Ensure the head is writeable before touching the shared info */
err = skb_unclone(skb, GFP_ATOMIC);
if (err)
goto err_linearize;
skb_shinfo(skb)->frag_list = NULL;
while (list_skb) {

@ -741,7 +741,7 @@ __bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash,
__diag_pop();
BTF_SET8_START(xdp_metadata_kfunc_ids)
#define XDP_METADATA_KFUNC(_, name) BTF_ID_FLAGS(func, name, 0)
#define XDP_METADATA_KFUNC(_, name) BTF_ID_FLAGS(func, name, KF_TRUSTED_ARGS)
XDP_METADATA_KFUNC_xxx
#undef XDP_METADATA_KFUNC
BTF_SET8_END(xdp_metadata_kfunc_ids)

@ -318,9 +318,8 @@ static void addrconf_del_dad_work(struct inet6_ifaddr *ifp)
static void addrconf_mod_rs_timer(struct inet6_dev *idev,
unsigned long when)
{
if (!timer_pending(&idev->rs_timer))
if (!mod_timer(&idev->rs_timer, jiffies + when))
in6_dev_hold(idev);
mod_timer(&idev->rs_timer, jiffies + when);
}
static void addrconf_mod_dad_work(struct inet6_ifaddr *ifp,
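
The addrconf change above fixes the idev refcount by keying on mod_timer()'s
return value instead of a separate timer_pending() check that could race with
the timer firing: mod_timer() returns 0 when the timer was idle (so a new
reference is needed for the now-pending timer) and 1 when it merely re-armed a
timer that already holds one. A hedged, kernel-style sketch of the
"one reference per pending timer" pattern, using a hypothetical object:

    /* Sketch: hold one reference on 'obj' for as long as its timer is queued.
     * struct my_obj and obj_hold() are illustrative, not real kernel symbols. */
    static void obj_arm_timer(struct my_obj *obj, unsigned long delay)
    {
        /* mod_timer() returns 0 if the timer was inactive, 1 if it only
         * moved an already-pending timer that already owns a reference. */
        if (!mod_timer(&obj->timer, jiffies + delay))
            obj_hold(obj);  /* dropped again by the timer handler */
    }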

@ -424,7 +424,10 @@ static struct net_device *icmp6_dev(const struct sk_buff *skb)
if (unlikely(dev->ifindex == LOOPBACK_IFINDEX || netif_is_l3_master(skb->dev))) {
const struct rt6_info *rt6 = skb_rt6_info(skb);
if (rt6)
/* The destination could be an external IP in Ext Hdr (SRv6, RPL, etc.),
* and ip6_null_entry could be set to skb if no route is found.
*/
if (rt6 && rt6->rt6i_idev)
dev = rt6->rt6i_idev->dev;
}

@ -45,6 +45,7 @@
#include <net/tcp_states.h>
#include <net/ip6_checksum.h>
#include <net/ip6_tunnel.h>
#include <trace/events/udp.h>
#include <net/xfrm.h>
#include <net/inet_hashtables.h>
#include <net/inet6_hashtables.h>
@ -90,7 +91,7 @@ static u32 udp6_ehashfn(const struct net *net,
fhash = __ipv6_addr_jhash(faddr, udp_ipv6_hash_secret);
return __inet6_ehashfn(lhash, lport, fhash, fport,
udp_ipv6_hash_secret + net_hash_mix(net));
udp6_ehash_secret + net_hash_mix(net));
}
int udp_v6_get_port(struct sock *sk, unsigned short snum)
@ -680,6 +681,7 @@ static int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
}
UDP6_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
kfree_skb_reason(skb, drop_reason);
trace_udp_fail_queue_rcv_skb(rc, sk);
return -1;
}

@ -211,24 +211,18 @@ static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple,
unsigned int zoneid,
const struct net *net)
{
u64 a, b, c, d;
siphash_key_t key;
get_random_once(&nf_conntrack_hash_rnd, sizeof(nf_conntrack_hash_rnd));
/* The direction must be ignored, handle usable tuplehash members manually */
a = (u64)tuple->src.u3.all[0] << 32 | tuple->src.u3.all[3];
b = (u64)tuple->dst.u3.all[0] << 32 | tuple->dst.u3.all[3];
key = nf_conntrack_hash_rnd;
c = (__force u64)tuple->src.u.all << 32 | (__force u64)tuple->dst.u.all << 16;
c |= tuple->dst.protonum;
key.key[0] ^= zoneid;
key.key[1] ^= net_hash_mix(net);
d = (u64)zoneid << 32 | net_hash_mix(net);
/* IPv4: u3.all[1,2,3] == 0 */
c ^= (u64)tuple->src.u3.all[1] << 32 | tuple->src.u3.all[2];
d += (u64)tuple->dst.u3.all[1] << 32 | tuple->dst.u3.all[2];
return (u32)siphash_4u64(a, b, c, d, &nf_conntrack_hash_rnd);
return siphash((void *)tuple,
offsetofend(struct nf_conntrack_tuple, dst.__nfct_hash_offsetend),
&key);
}
static u32 scale_hash(u32 hash)
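
The conntrack hashing rework above replaces the hand-assembled siphash_4u64()
over selected tuple fields with a siphash over the raw leading bytes of the
tuple, up to the new zero-size __nfct_hash_offsetend marker, so everything
before the direction byte is covered and the direction itself stays out of the
hash. A standalone sketch of hashing a struct prefix with offsetofend(), using
a trivial FNV hash in place of siphash and naming the last hashed member
directly instead of a marker field:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define offsetofend(TYPE, MEMBER) \
        (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

    struct tuple {
        uint32_t src_ip;
        uint32_t dst_ip;
        uint16_t src_port;
        uint16_t dst_port;
        uint8_t  protonum;  /* last field covered by the hash */
        uint8_t  dir;       /* excluded, like the conntrack direction */
    };

    /* Stand-in for siphash(): any keyed hash over the same bytes would do. */
    static uint32_t hash_bytes(const void *p, size_t len)
    {
        const uint8_t *b = p;
        uint32_t h = 2166136261u;

        while (len--)
            h = (h ^ *b++) * 16777619u; /* FNV-1a */
        return h;
    }

    int main(void)
    {
        struct tuple t = { .src_ip = 1, .dst_ip = 2, .src_port = 3,
                           .dst_port = 4, .protonum = 6, .dir = 0 };
        size_t len = offsetofend(struct tuple, protonum);
        uint32_t a, b;

        a = hash_bytes(&t, len);
        t.dir = 1;              /* flipping the direction ... */
        b = hash_bytes(&t, len);
        printf("%s\n", a == b ? "same hash" : "different hash");
        return 0;
    }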

@ -360,6 +360,9 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
BUG_ON(me->expect_class_max >= NF_CT_MAX_EXPECT_CLASSES);
BUG_ON(strlen(me->name) > NF_CT_HELPER_NAME_LEN - 1);
if (!nf_ct_helper_hash)
return -ENOENT;
if (me->expect_policy->max_expected > NF_CT_EXPECT_MAX_CNT)
return -EINVAL;
@ -515,4 +518,5 @@ int nf_conntrack_helper_init(void)
void nf_conntrack_helper_fini(void)
{
kvfree(nf_ct_helper_hash);
nf_ct_helper_hash = NULL;
}

@ -205,6 +205,8 @@ int nf_conntrack_gre_packet(struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
const struct nf_hook_state *state)
{
unsigned long status;
if (!nf_ct_is_confirmed(ct)) {
unsigned int *timeouts = nf_ct_timeout_lookup(ct);
@ -217,11 +219,17 @@ int nf_conntrack_gre_packet(struct nf_conn *ct,
ct->proto.gre.timeout = timeouts[GRE_CT_UNREPLIED];
}
status = READ_ONCE(ct->status);
/* If we've seen traffic both ways, this is a GRE connection.
* Extend timeout. */
if (ct->status & IPS_SEEN_REPLY) {
if (status & IPS_SEEN_REPLY) {
nf_ct_refresh_acct(ct, ctinfo, skb,
ct->proto.gre.stream_timeout);
/* never set ASSURED for IPS_NAT_CLASH, they time out soon */
if (unlikely((status & IPS_NAT_CLASH)))
return NF_ACCEPT;
/* Also, more likely to be important, and not a probe. */
if (!test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
nf_conntrack_event_cache(IPCT_ASSURED, ct);
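
The GRE tracker fix above reads a snapshot of ct->status with READ_ONCE(),
skips the assured logic entirely for NAT-clash entries, and then relies on
test_and_set_bit() returning the bit's previous value so the ASSURED event is
generated only on the first 0 -> 1 transition. A small standalone sketch of
that fire-once pattern, using C11 atomics instead of the kernel bitops:

    #include <stdatomic.h>
    #include <stdio.h>

    #define ASSURED_BIT (1u << 2)   /* illustrative bit position */

    static atomic_uint status;

    static void mark_assured(void)
    {
        unsigned int old = atomic_fetch_or(&status, ASSURED_BIT);

        /* only the caller that flips the bit 0 -> 1 reports the event */
        if (!(old & ASSURED_BIT))
            printf("event: connection assured\n");
    }

    int main(void)
    {
        mark_assured(); /* prints once */
        mark_assured(); /* silent */
        return 0;
    }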

@ -253,8 +253,10 @@ int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain)
if (chain->bound)
return -EBUSY;
if (!nft_use_inc(&chain->use))
return -EMFILE;
chain->bound = true;
chain->use++;
nft_chain_trans_bind(ctx, chain);
return 0;
@ -437,7 +439,7 @@ static int nft_delchain(struct nft_ctx *ctx)
if (IS_ERR(trans))
return PTR_ERR(trans);
ctx->table->use--;
nft_use_dec(&ctx->table->use);
nft_deactivate_next(ctx->net, ctx->chain);
return 0;
@ -476,7 +478,7 @@ nf_tables_delrule_deactivate(struct nft_ctx *ctx, struct nft_rule *rule)
/* You cannot delete the same rule twice */
if (nft_is_active_next(ctx->net, rule)) {
nft_deactivate_next(ctx->net, rule);
ctx->chain->use--;
nft_use_dec(&ctx->chain->use);
return 0;
}
return -ENOENT;
@ -644,7 +646,7 @@ static int nft_delset(const struct nft_ctx *ctx, struct nft_set *set)
nft_map_deactivate(ctx, set);
nft_deactivate_next(ctx->net, set);
ctx->table->use--;
nft_use_dec(&ctx->table->use);
return err;
}
@ -676,7 +678,7 @@ static int nft_delobj(struct nft_ctx *ctx, struct nft_object *obj)
return err;
nft_deactivate_next(ctx->net, obj);
ctx->table->use--;
nft_use_dec(&ctx->table->use);
return err;
}
@ -711,7 +713,7 @@ static int nft_delflowtable(struct nft_ctx *ctx,
return err;
nft_deactivate_next(ctx->net, flowtable);
ctx->table->use--;
nft_use_dec(&ctx->table->use);
return err;
}
@ -2396,9 +2398,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
struct nft_chain *chain;
int err;
if (table->use == UINT_MAX)
return -EOVERFLOW;
if (nla[NFTA_CHAIN_HOOK]) {
struct nft_stats __percpu *stats = NULL;
struct nft_chain_hook hook = {};
@ -2494,6 +2493,11 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
if (err < 0)
goto err_destroy_chain;
if (!nft_use_inc(&table->use)) {
err = -EMFILE;
goto err_use;
}
trans = nft_trans_chain_add(ctx, NFT_MSG_NEWCHAIN);
if (IS_ERR(trans)) {
err = PTR_ERR(trans);
@ -2510,10 +2514,11 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
goto err_unregister_hook;
}
table->use++;
return 0;
err_unregister_hook:
nft_use_dec_restore(&table->use);
err_use:
nf_tables_unregister_hook(net, table, chain);
err_destroy_chain:
nf_tables_chain_destroy(ctx);
@ -2694,7 +2699,7 @@ err_hooks:
static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
const struct nft_table *table,
const struct nlattr *nla)
const struct nlattr *nla, u8 genmask)
{
struct nftables_pernet *nft_net = nft_pernet(net);
u32 id = ntohl(nla_get_be32(nla));
@ -2705,7 +2710,8 @@ static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
if (trans->msg_type == NFT_MSG_NEWCHAIN &&
chain->table == table &&
id == nft_trans_chain_id(trans))
id == nft_trans_chain_id(trans) &&
nft_active_genmask(chain, genmask))
return chain;
}
return ERR_PTR(-ENOENT);
@ -3809,7 +3815,8 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
return -EOPNOTSUPP;
} else if (nla[NFTA_RULE_CHAIN_ID]) {
chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID]);
chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID],
genmask);
if (IS_ERR(chain)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN_ID]);
return PTR_ERR(chain);
@ -3840,9 +3847,6 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
return -EINVAL;
handle = nf_tables_alloc_handle(table);
if (chain->use == UINT_MAX)
return -EOVERFLOW;
if (nla[NFTA_RULE_POSITION]) {
pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION]));
old_rule = __nft_rule_lookup(chain, pos_handle);
@ -3936,6 +3940,11 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
}
}
if (!nft_use_inc(&chain->use)) {
err = -EMFILE;
goto err_release_rule;
}
if (info->nlh->nlmsg_flags & NLM_F_REPLACE) {
err = nft_delrule(&ctx, old_rule);
if (err < 0)
@ -3967,7 +3976,6 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
}
}
kvfree(expr_info);
chain->use++;
if (flow)
nft_trans_flow_rule(trans) = flow;
@ -3978,6 +3986,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
return 0;
err_destroy_flow_rule:
nft_use_dec_restore(&chain->use);
if (flow)
nft_flow_rule_destroy(flow);
err_release_rule:
@ -5014,9 +5023,15 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
alloc_size = sizeof(*set) + size + udlen;
if (alloc_size < size || alloc_size > INT_MAX)
return -ENOMEM;
if (!nft_use_inc(&table->use))
return -EMFILE;
set = kvzalloc(alloc_size, GFP_KERNEL_ACCOUNT);
if (!set)
return -ENOMEM;
if (!set) {
err = -ENOMEM;
goto err_alloc;
}
name = nla_strdup(nla[NFTA_SET_NAME], GFP_KERNEL_ACCOUNT);
if (!name) {
@ -5074,7 +5089,7 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
goto err_set_expr_alloc;
list_add_tail_rcu(&set->list, &table->sets);
table->use++;
return 0;
err_set_expr_alloc:
@ -5086,6 +5101,9 @@ err_set_init:
kfree(set->name);
err_set_name:
kvfree(set);
err_alloc:
nft_use_dec_restore(&table->use);
return err;
}
@ -5224,9 +5242,6 @@ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_set_binding *i;
struct nft_set_iter iter;
if (set->use == UINT_MAX)
return -EOVERFLOW;
if (!list_empty(&set->bindings) && nft_set_is_anonymous(set))
return -EBUSY;
@ -5254,10 +5269,12 @@ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
return iter.err;
}
bind:
if (!nft_use_inc(&set->use))
return -EMFILE;
binding->chain = ctx->chain;
list_add_tail_rcu(&binding->list, &set->bindings);
nft_set_trans_bind(ctx, set);
set->use++;
return 0;
}
@ -5331,7 +5348,7 @@ void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
nft_clear(ctx->net, set);
}
set->use++;
nft_use_inc_restore(&set->use);
}
EXPORT_SYMBOL_GPL(nf_tables_activate_set);
@ -5347,7 +5364,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
else
list_del_rcu(&binding->list);
set->use--;
nft_use_dec(&set->use);
break;
case NFT_TRANS_PREPARE:
if (nft_set_is_anonymous(set)) {
@ -5356,7 +5373,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
nft_deactivate_next(ctx->net, set);
}
set->use--;
nft_use_dec(&set->use);
return;
case NFT_TRANS_ABORT:
case NFT_TRANS_RELEASE:
@ -5364,7 +5381,7 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
nft_map_deactivate(ctx, set);
set->use--;
nft_use_dec(&set->use);
fallthrough;
default:
nf_tables_unbind_set(ctx, set, binding,
@ -6155,7 +6172,7 @@ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
nft_set_elem_expr_destroy(&ctx, nft_set_ext_expr(ext));
if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
(*nft_set_ext_obj(ext))->use--;
nft_use_dec(&(*nft_set_ext_obj(ext))->use);
kfree(elem);
}
EXPORT_SYMBOL_GPL(nft_set_elem_destroy);
@ -6657,8 +6674,16 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
set->objtype, genmask);
if (IS_ERR(obj)) {
err = PTR_ERR(obj);
obj = NULL;
goto err_parse_key_end;
}
if (!nft_use_inc(&obj->use)) {
err = -EMFILE;
obj = NULL;
goto err_parse_key_end;
}
err = nft_set_ext_add(&tmpl, NFT_SET_EXT_OBJREF);
if (err < 0)
goto err_parse_key_end;
@ -6727,10 +6752,9 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
if (flags)
*nft_set_ext_flags(ext) = flags;
if (obj) {
if (obj)
*nft_set_ext_obj(ext) = obj;
obj->use++;
}
if (ulen > 0) {
if (nft_set_ext_check(&tmpl, NFT_SET_EXT_USERDATA, ulen) < 0) {
err = -EINVAL;
@ -6798,12 +6822,13 @@ err_element_clash:
kfree(trans);
err_elem_free:
nf_tables_set_elem_destroy(ctx, set, elem.priv);
if (obj)
obj->use--;
err_parse_data:
if (nla[NFTA_SET_ELEM_DATA] != NULL)
nft_data_release(&elem.data.val, desc.type);
err_parse_key_end:
if (obj)
nft_use_dec_restore(&obj->use);
nft_data_release(&elem.key_end.val, NFT_DATA_VALUE);
err_parse_key:
nft_data_release(&elem.key.val, NFT_DATA_VALUE);
@ -6883,7 +6908,7 @@ void nft_data_hold(const struct nft_data *data, enum nft_data_types type)
case NFT_JUMP:
case NFT_GOTO:
chain = data->verdict.chain;
chain->use++;
nft_use_inc_restore(&chain->use);
break;
}
}
@ -6898,7 +6923,7 @@ static void nft_setelem_data_activate(const struct net *net,
if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))
nft_data_hold(nft_set_ext_data(ext), set->dtype);
if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
(*nft_set_ext_obj(ext))->use++;
nft_use_inc_restore(&(*nft_set_ext_obj(ext))->use);
}
static void nft_setelem_data_deactivate(const struct net *net,
@ -6910,7 +6935,7 @@ static void nft_setelem_data_deactivate(const struct net *net,
if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))
nft_data_release(nft_set_ext_data(ext), set->dtype);
if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
(*nft_set_ext_obj(ext))->use--;
nft_use_dec(&(*nft_set_ext_obj(ext))->use);
}
static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
@ -7453,9 +7478,14 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
if (!nft_use_inc(&table->use))
return -EMFILE;
type = nft_obj_type_get(net, objtype);
if (IS_ERR(type))
return PTR_ERR(type);
if (IS_ERR(type)) {
err = PTR_ERR(type);
goto err_type;
}
obj = nft_obj_init(&ctx, type, nla[NFTA_OBJ_DATA]);
if (IS_ERR(obj)) {
@ -7489,7 +7519,7 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
goto err_obj_ht;
list_add_tail_rcu(&obj->list, &table->objects);
table->use++;
return 0;
err_obj_ht:
/* queued in transaction log */
@ -7505,6 +7535,9 @@ err_strdup:
kfree(obj);
err_init:
module_put(type->owner);
err_type:
nft_use_dec_restore(&table->use);
return err;
}
@ -7906,7 +7939,7 @@ void nf_tables_deactivate_flowtable(const struct nft_ctx *ctx,
case NFT_TRANS_PREPARE:
case NFT_TRANS_ABORT:
case NFT_TRANS_RELEASE:
flowtable->use--;
nft_use_dec(&flowtable->use);
fallthrough;
default:
return;
@ -8260,9 +8293,14 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
if (!nft_use_inc(&table->use))
return -EMFILE;
flowtable = kzalloc(sizeof(*flowtable), GFP_KERNEL_ACCOUNT);
if (!flowtable)
return -ENOMEM;
if (!flowtable) {
err = -ENOMEM;
goto flowtable_alloc;
}
flowtable->table = table;
flowtable->handle = nf_tables_alloc_handle(table);
@ -8317,7 +8355,6 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
goto err5;
list_add_tail_rcu(&flowtable->list, &table->flowtables);
table->use++;
return 0;
err5:
@ -8334,6 +8371,9 @@ err2:
kfree(flowtable->name);
err1:
kfree(flowtable);
flowtable_alloc:
nft_use_dec_restore(&table->use);
return err;
}
@ -9713,7 +9753,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
*/
if (nft_set_is_anonymous(nft_trans_set(trans)) &&
!list_empty(&nft_trans_set(trans)->bindings))
trans->ctx.table->use--;
nft_use_dec(&trans->ctx.table->use);
}
nf_tables_set_notify(&trans->ctx, nft_trans_set(trans),
NFT_MSG_NEWSET, GFP_KERNEL);
@ -9943,7 +9983,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_trans_destroy(trans);
break;
}
trans->ctx.table->use--;
nft_use_dec_restore(&trans->ctx.table->use);
nft_chain_del(trans->ctx.chain);
nf_tables_unregister_hook(trans->ctx.net,
trans->ctx.table,
@ -9956,7 +9996,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
list_splice(&nft_trans_chain_hooks(trans),
&nft_trans_basechain(trans)->hook_list);
} else {
trans->ctx.table->use++;
nft_use_inc_restore(&trans->ctx.table->use);
nft_clear(trans->ctx.net, trans->ctx.chain);
}
nft_trans_destroy(trans);
@ -9966,7 +10006,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_trans_destroy(trans);
break;
}
trans->ctx.chain->use--;
nft_use_dec_restore(&trans->ctx.chain->use);
list_del_rcu(&nft_trans_rule(trans)->list);
nft_rule_expr_deactivate(&trans->ctx,
nft_trans_rule(trans),
@ -9976,7 +10016,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
break;
case NFT_MSG_DELRULE:
case NFT_MSG_DESTROYRULE:
trans->ctx.chain->use++;
nft_use_inc_restore(&trans->ctx.chain->use);
nft_clear(trans->ctx.net, nft_trans_rule(trans));
nft_rule_expr_activate(&trans->ctx, nft_trans_rule(trans));
if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
@ -9989,7 +10029,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_trans_destroy(trans);
break;
}
trans->ctx.table->use--;
nft_use_dec_restore(&trans->ctx.table->use);
if (nft_trans_set_bound(trans)) {
nft_trans_destroy(trans);
break;
@ -9998,7 +10038,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
break;
case NFT_MSG_DELSET:
case NFT_MSG_DESTROYSET:
trans->ctx.table->use++;
nft_use_inc_restore(&trans->ctx.table->use);
nft_clear(trans->ctx.net, nft_trans_set(trans));
if (nft_trans_set(trans)->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
nft_map_activate(&trans->ctx, nft_trans_set(trans));
@ -10042,13 +10082,13 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_obj_destroy(&trans->ctx, nft_trans_obj_newobj(trans));
nft_trans_destroy(trans);
} else {
trans->ctx.table->use--;
nft_use_dec_restore(&trans->ctx.table->use);
nft_obj_del(nft_trans_obj(trans));
}
break;
case NFT_MSG_DELOBJ:
case NFT_MSG_DESTROYOBJ:
trans->ctx.table->use++;
nft_use_inc_restore(&trans->ctx.table->use);
nft_clear(trans->ctx.net, nft_trans_obj(trans));
nft_trans_destroy(trans);
break;
@ -10057,7 +10097,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
nft_unregister_flowtable_net_hooks(net,
&nft_trans_flowtable_hooks(trans));
} else {
trans->ctx.table->use--;
nft_use_dec_restore(&trans->ctx.table->use);
list_del_rcu(&nft_trans_flowtable(trans)->list);
nft_unregister_flowtable_net_hooks(net,
&nft_trans_flowtable(trans)->hook_list);
@ -10069,7 +10109,7 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
list_splice(&nft_trans_flowtable_hooks(trans),
&nft_trans_flowtable(trans)->hook_list);
} else {
trans->ctx.table->use++;
nft_use_inc_restore(&trans->ctx.table->use);
nft_clear(trans->ctx.net, nft_trans_flowtable(trans));
}
nft_trans_destroy(trans);
@ -10502,7 +10542,8 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
genmask);
} else if (tb[NFTA_VERDICT_CHAIN_ID]) {
chain = nft_chain_lookup_byid(ctx->net, ctx->table,
tb[NFTA_VERDICT_CHAIN_ID]);
tb[NFTA_VERDICT_CHAIN_ID],
genmask);
if (IS_ERR(chain))
return PTR_ERR(chain);
} else {
@ -10518,8 +10559,9 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
if (desc->flags & NFT_DATA_DESC_SETELEM &&
chain->flags & NFT_CHAIN_BINDING)
return -EINVAL;
if (!nft_use_inc(&chain->use))
return -EMFILE;
chain->use++;
data->verdict.chain = chain;
break;
}
@ -10537,7 +10579,7 @@ static void nft_verdict_uninit(const struct nft_data *data)
case NFT_JUMP:
case NFT_GOTO:
chain = data->verdict.chain;
chain->use--;
nft_use_dec(&chain->use);
break;
}
}
@ -10706,11 +10748,11 @@ int __nft_release_basechain(struct nft_ctx *ctx)
nf_tables_unregister_hook(ctx->net, ctx->chain->table, ctx->chain);
list_for_each_entry_safe(rule, nr, &ctx->chain->rules, list) {
list_del(&rule->list);
ctx->chain->use--;
nft_use_dec(&ctx->chain->use);
nf_tables_rule_release(ctx, rule);
}
nft_chain_del(ctx->chain);
ctx->table->use--;
nft_use_dec(&ctx->table->use);
nf_tables_chain_destroy(ctx);
return 0;
@ -10760,18 +10802,18 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
ctx.chain = chain;
list_for_each_entry_safe(rule, nr, &chain->rules, list) {
list_del(&rule->list);
chain->use--;
nft_use_dec(&chain->use);
nf_tables_rule_release(&ctx, rule);
}
}
list_for_each_entry_safe(flowtable, nf, &table->flowtables, list) {
list_del(&flowtable->list);
table->use--;
nft_use_dec(&table->use);
nf_tables_flowtable_destroy(flowtable);
}
list_for_each_entry_safe(set, ns, &table->sets, list) {
list_del(&set->list);
table->use--;
nft_use_dec(&table->use);
if (set->flags & (NFT_SET_MAP | NFT_SET_OBJECT))
nft_map_deactivate(&ctx, set);
@ -10779,13 +10821,13 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
}
list_for_each_entry_safe(obj, ne, &table->objects, list) {
nft_obj_del(obj);
table->use--;
nft_use_dec(&table->use);
nft_obj_destroy(&ctx, obj);
}
list_for_each_entry_safe(chain, nc, &table->chains, list) {
ctx.chain = chain;
nft_chain_del(chain);
table->use--;
nft_use_dec(&table->use);
nf_tables_chain_destroy(&ctx);
}
nf_tables_table_destroy(&ctx);

@ -30,11 +30,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
const struct nft_byteorder *priv = nft_expr_priv(expr);
u32 *src = &regs->data[priv->sreg];
u32 *dst = &regs->data[priv->dreg];
union { u32 u32; u16 u16; } *s, *d;
u16 *s16, *d16;
unsigned int i;
s = (void *)src;
d = (void *)dst;
s16 = (void *)src;
d16 = (void *)dst;
switch (priv->size) {
case 8: {
@ -62,11 +62,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
switch (priv->op) {
case NFT_BYTEORDER_NTOH:
for (i = 0; i < priv->len / 4; i++)
d[i].u32 = ntohl((__force __be32)s[i].u32);
dst[i] = ntohl((__force __be32)src[i]);
break;
case NFT_BYTEORDER_HTON:
for (i = 0; i < priv->len / 4; i++)
d[i].u32 = (__force __u32)htonl(s[i].u32);
dst[i] = (__force __u32)htonl(src[i]);
break;
}
break;
@ -74,11 +74,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
switch (priv->op) {
case NFT_BYTEORDER_NTOH:
for (i = 0; i < priv->len / 2; i++)
d[i].u16 = ntohs((__force __be16)s[i].u16);
d16[i] = ntohs((__force __be16)s16[i]);
break;
case NFT_BYTEORDER_HTON:
for (i = 0; i < priv->len / 2; i++)
d[i].u16 = (__force __u16)htons(s[i].u16);
d16[i] = (__force __u16)htons(s16[i]);
break;
}
break;
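
The nft_byteorder rewrite above matters because the old code walked the 32-bit
register array through a pointer to "union { u32 u32; u16 u16; }": the union
is four bytes wide, so each 16-bit step advanced four bytes and the loop could
run past the area it was meant to touch. Plain u16 pointers (and direct u32
indexing) keep every access in bounds. A standalone demonstration of the
stride difference:

    #include <stdint.h>
    #include <stdio.h>

    union reg_cell { uint32_t u32; uint16_t u16; };

    int main(void)
    {
        uint32_t regs[4] = { 1, 2, 3, 4 };  /* 16 bytes of register data */

        /* Stepping u16 values through the union advances 4 bytes per
         * element, so 16-bit element 4 already sits at byte 16 -- past
         * the end of regs[]. A plain uint16_t pointer puts it at byte 8. */
        printf("union stride: %zu bytes, u16 stride: %zu bytes\n",
               sizeof(union reg_cell), sizeof(uint16_t));
        printf("16-bit element 4: union view byte %zu, u16 view byte %zu\n",
               4 * sizeof(union reg_cell), 4 * sizeof(uint16_t));
        printf("regs[0] = %u, regs[] spans %zu bytes\n", regs[0], sizeof(regs));
        return 0;
    }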

@ -408,8 +408,10 @@ static int nft_flow_offload_init(const struct nft_ctx *ctx,
if (IS_ERR(flowtable))
return PTR_ERR(flowtable);
if (!nft_use_inc(&flowtable->use))
return -EMFILE;
priv->flowtable = flowtable;
flowtable->use++;
return nf_ct_netns_get(ctx->net, ctx->family);
}
@ -428,7 +430,7 @@ static void nft_flow_offload_activate(const struct nft_ctx *ctx,
{
struct nft_flow_offload *priv = nft_expr_priv(expr);
priv->flowtable->use++;
nft_use_inc_restore(&priv->flowtable->use);
}
static void nft_flow_offload_destroy(const struct nft_ctx *ctx,

@ -159,7 +159,7 @@ static void nft_immediate_deactivate(const struct nft_ctx *ctx,
default:
nft_chain_del(chain);
chain->bound = false;
chain->table->use--;
nft_use_dec(&chain->table->use);
break;
}
break;
@ -198,7 +198,7 @@ static void nft_immediate_destroy(const struct nft_ctx *ctx,
* let the transaction records release this chain and its rules.
*/
if (chain->bound) {
chain->use--;
nft_use_dec(&chain->use);
break;
}
@ -206,9 +206,9 @@ static void nft_immediate_destroy(const struct nft_ctx *ctx,
chain_ctx = *ctx;
chain_ctx.chain = chain;
chain->use--;
nft_use_dec(&chain->use);
list_for_each_entry_safe(rule, n, &chain->rules, list) {
chain->use--;
nft_use_dec(&chain->use);
list_del(&rule->list);
nf_tables_rule_destroy(&chain_ctx, rule);
}

@ -41,8 +41,10 @@ static int nft_objref_init(const struct nft_ctx *ctx,
if (IS_ERR(obj))
return -ENOENT;
if (!nft_use_inc(&obj->use))
return -EMFILE;
nft_objref_priv(expr) = obj;
obj->use++;
return 0;
}
@ -72,7 +74,7 @@ static void nft_objref_deactivate(const struct nft_ctx *ctx,
if (phase == NFT_TRANS_COMMIT)
return;
obj->use--;
nft_use_dec(&obj->use);
}
static void nft_objref_activate(const struct nft_ctx *ctx,
@ -80,7 +82,7 @@ static void nft_objref_activate(const struct nft_ctx *ctx,
{
struct nft_object *obj = nft_objref_priv(expr);
obj->use++;
nft_use_inc_restore(&obj->use);
}
static const struct nft_expr_ops nft_objref_ops = {

@ -1320,7 +1320,7 @@ struct tc_action_ops *tc_action_load_ops(struct nlattr *nla, bool police,
return ERR_PTR(err);
}
} else {
if (strlcpy(act_name, "police", IFNAMSIZ) >= IFNAMSIZ) {
if (strscpy(act_name, "police", IFNAMSIZ) < 0) {
NL_SET_ERR_MSG(extack, "TC action name too long");
return ERR_PTR(-EINVAL);
}
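
The act_api change above is part of the move away from strlcpy(): strlcpy()
returns the length of the source string (so overflow is only detected
indirectly, and the source is always read in full), while strscpy() is bounded
and returns either the number of characters copied or -E2BIG when the result
was truncated, which is what the new "< 0" test checks. A kernel-style sketch
of the idiom (the function and buffer here are hypothetical):

    /* Sketch: copy a name into a fixed-size buffer, rejecting truncation. */
    static int set_action_name(char *dst, size_t dst_size, const char *src)
    {
        /* strscpy() copies at most dst_size - 1 bytes, always NUL-terminates,
         * and returns -E2BIG if src did not fit. */
        if (strscpy(dst, src, dst_size) < 0)
            return -EINVAL; /* name too long */
        return 0;
    }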

@ -812,6 +812,16 @@ static int fl_set_key_port_range(struct nlattr **tb, struct fl_flow_key *key,
TCA_FLOWER_KEY_PORT_SRC_MAX, &mask->tp_range.tp_max.src,
TCA_FLOWER_UNSPEC, sizeof(key->tp_range.tp_max.src));
if (mask->tp_range.tp_min.dst != mask->tp_range.tp_max.dst) {
NL_SET_ERR_MSG(extack,
"Both min and max destination ports must be specified");
return -EINVAL;
}
if (mask->tp_range.tp_min.src != mask->tp_range.tp_max.src) {
NL_SET_ERR_MSG(extack,
"Both min and max source ports must be specified");
return -EINVAL;
}
if (mask->tp_range.tp_min.dst && mask->tp_range.tp_max.dst &&
ntohs(key->tp_range.tp_max.dst) <=
ntohs(key->tp_range.tp_min.dst)) {

@ -212,11 +212,6 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
if (err < 0)
return err;
if (tb[TCA_FW_CLASSID]) {
f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
tcf_bind_filter(tp, &f->res, base);
}
if (tb[TCA_FW_INDEV]) {
int ret;
ret = tcf_change_indev(net, tb[TCA_FW_INDEV], extack);
@ -233,6 +228,11 @@ static int fw_set_parms(struct net *net, struct tcf_proto *tp,
} else if (head->mask != 0xFFFFFFFF)
return err;
if (tb[TCA_FW_CLASSID]) {
f->res.classid = nla_get_u32(tb[TCA_FW_CLASSID]);
tcf_bind_filter(tp, &f->res, base);
}
return 0;
}

@ -381,8 +381,13 @@ static int qfq_change_agg(struct Qdisc *sch, struct qfq_class *cl, u32 weight,
u32 lmax)
{
struct qfq_sched *q = qdisc_priv(sch);
struct qfq_aggregate *new_agg = qfq_find_agg(q, lmax, weight);
struct qfq_aggregate *new_agg;
/* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */
if (lmax > QFQ_MAX_LMAX)
return -EINVAL;
new_agg = qfq_find_agg(q, lmax, weight);
if (new_agg == NULL) { /* create new aggregate */
new_agg = kzalloc(sizeof(*new_agg), GFP_ATOMIC);
if (new_agg == NULL)
@ -423,10 +428,17 @@ static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
else
weight = 1;
if (tb[TCA_QFQ_LMAX])
if (tb[TCA_QFQ_LMAX]) {
lmax = nla_get_u32(tb[TCA_QFQ_LMAX]);
else
} else {
/* MTU size is user controlled */
lmax = psched_mtu(qdisc_dev(sch));
if (lmax < QFQ_MIN_LMAX || lmax > QFQ_MAX_LMAX) {
NL_SET_ERR_MSG_MOD(extack,
"MTU size out of bounds for qfq");
return -EINVAL;
}
}
inv_w = ONE_FP / weight;
weight = ONE_FP / inv_w;

@ -580,6 +580,8 @@ int ieee80211_strip_8023_mesh_hdr(struct sk_buff *skb)
hdrlen += ETH_ALEN + 2;
else if (!pskb_may_pull(skb, hdrlen))
return -EINVAL;
else
payload.eth.h_proto = htons(skb->len - hdrlen);
mesh_addr = skb->data + sizeof(payload.eth) + ETH_ALEN;
switch (payload.flags & MESH_FLAGS_AE) {

@ -0,0 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
#include "async_stack_depth.skel.h"
void test_async_stack_depth(void)
{
RUN_TESTS(async_stack_depth);
}

@ -0,0 +1,40 @@
// SPDX-License-Identifier: GPL-2.0
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
struct hmap_elem {
struct bpf_timer timer;
};
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(max_entries, 64);
__type(key, int);
__type(value, struct hmap_elem);
} hmap SEC(".maps");
__attribute__((noinline))
static int timer_cb(void *map, int *key, struct bpf_timer *timer)
{
volatile char buf[256] = {};
return buf[69];
}
SEC("tc")
__failure __msg("combined stack size of 2 calls")
int prog(struct __sk_buff *ctx)
{
struct hmap_elem *elem;
volatile char buf[256] = {};
elem = bpf_map_lookup_elem(&hmap, &(int){0});
if (!elem)
return 0;
timer_cb(NULL, NULL, NULL);
return bpf_timer_set_callback(&elem->timer, timer_cb) + buf[0];
}
char _license[] SEC("license") = "GPL";

@ -213,5 +213,91 @@
"$TC qdisc del dev $DUMMY handle 1: root",
"$IP link del dev $DUMMY type dummy"
]
},
{
"id": "85ee",
"name": "QFQ with big MTU",
"category": [
"qdisc",
"qfq"
],
"plugins": {
"requires": "nsPlugin"
},
"setup": [
"$IP link add dev $DUMMY type dummy || /bin/true",
"$IP link set dev $DUMMY mtu 2147483647 || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
"expExitCode": "2",
"verifyCmd": "$TC class show dev $DUMMY",
"matchPattern": "class qfq 1:",
"matchCount": "0",
"teardown": [
"$IP link del dev $DUMMY type dummy"
]
},
{
"id": "ddfa",
"name": "QFQ with small MTU",
"category": [
"qdisc",
"qfq"
],
"plugins": {
"requires": "nsPlugin"
},
"setup": [
"$IP link add dev $DUMMY type dummy || /bin/true",
"$IP link set dev $DUMMY mtu 256 || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
"expExitCode": "2",
"verifyCmd": "$TC class show dev $DUMMY",
"matchPattern": "class qfq 1:",
"matchCount": "0",
"teardown": [
"$IP link del dev $DUMMY type dummy"
]
},
{
"id": "5993",
"name": "QFQ with stab overhead greater than max packet len",
"category": [
"qdisc",
"qfq",
"scapy"
],
"plugins": {
"requires": [
"nsPlugin",
"scapyPlugin"
]
},
"setup": [
"$IP link add dev $DUMMY type dummy || /bin/true",
"$IP link set dev $DUMMY up || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: stab mtu 2048 tsize 512 mpu 0 overhead 999999999 linklayer ethernet root qfq",
"$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
"$TC qdisc add dev $DEV1 clsact",
"$TC filter add dev $DEV1 ingress protocol ip flower dst_ip 1.3.3.7/32 action mirred egress mirror dev $DUMMY"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent 1: matchall classid 1:1",
"scapy": [
{
"iface": "$DEV0",
"count": 22,
"packet": "Ether(type=0x800)/IP(src='10.0.0.10',dst='1.3.3.7')/TCP(sport=5000,dport=10)"
}
],
"expExitCode": "0",
"verifyCmd": "$TC -s qdisc ls dev $DUMMY",
"matchPattern": "dropped 22",
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1: root qfq"
]
}
]