mirror of
https://github.com/Fishwaldo/Star64_linux.git
synced 2025-03-16 04:04:06 +00:00
Networking fixes for 5.15-rc6.
Current release - regressions: - af_unix: rename UNIX-DGRAM to UNIX to maintain backwards compatibility - procfs: revert "add seq_puts() statement for dev_mcast", minor format change broke user space Current release - new code bugs: - dsa: fix bridge_num not getting cleared after ports leaving the bridge, resource leak - dsa: tag_dsa: send packets with TX fwd offload from VLAN-unaware bridges using VID 0, prevent packet drops if pvid is removed - dsa: mv88e6xxx: keep the pvid at 0 when VLAN-unaware, prevent HW getting confused about station to VLAN mapping Previous releases - regressions: - virtio-net: fix for skb_over_panic inside big mode - phy: do not shutdown PHYs in READY state - dsa: mv88e6xxx: don't use PHY_DETECT on internal PHY's, fix link LED staying lit after ifdown - mptcp: fix possible infinite wait on recvmsg(MSG_WAITALL) - mqprio: Correct stats in mqprio_dump_class_stats() - ice: fix deadlock for Tx timestamp tracking flush - stmmac: fix feature detection on old hardware Previous releases - always broken: - sctp: account stream padding length for reconf chunk - icmp: fix icmp_ext_echo_iio parsing in icmp_build_probe() - isdn: cpai: check ctr->cnr to avoid array index out of bound - isdn: mISDN: fix sleeping function called from invalid context - nfc: nci: fix potential UAF of rf_conn_info object - dsa: microchip: prevent ksz_mib_read_work from kicking back in after it's canceled in .remove and crashing - dsa: mv88e6xxx: isolate the ATU databases of standalone and bridged ports - dsa: sja1105, ocelot: break circular dependency between switch and tag drivers - dsa: felix: improve timestamping in presence of packe loss - mlxsw: thermal: fix out-of-bounds memory accesses Misc: - ipv6: ioam: move the check for undefined bits to improve interoperability Signed-off-by: Jakub Kicinski <kuba@kernel.org> -----BEGIN PGP SIGNATURE----- iQIzBAABCAAdFiEE6jPA+I1ugmIBA4hXMUZtbf5SIrsFAmFoSAcACgkQMUZtbf5S Irupdw/7BAWMN6LZ/tmnDJMO9st3TPVKfd9hE8P0sl3YMw568kC61nNLei9k34Pl 7GfQRjBnalnr5ueX9hZHZmJBqj0XfXP4ZLjCoTNNfwG3mgoZ34BRODxgM60hnvK/ VFFG5z1bEwPRXDm5urgOmbtVadUXDu/6uZHC/SxnPpy4LlLkpCigUM9FMFaOOx1q vJu/0D0RGPv+ukBTyLwyZ9ux1erzD8UAR9uVA8HMFYpSH5MFDG+DmsWHT/IC+0Jl TbWmltj9ED5kKqfQxW5gW/xc30H5o33SAzAM1/l6dnHhGfjoKqr5+6MdgAYNT3Y3 6VcNyMArqqJF/+gBFiRzBJek/K5w40bW+EXLGIaa/BdJtJg6UrMhSlcmE3My+4WU vFp1+kTuLhxSp7co319IcuHTaPnvw9U7NUmdoOCDMOdbTPT369VNjDs9PN3SXhO7 6mXUNPyS9zycZfBYkCRd5uWHjWBMvImY6VdrTaPsWCBrtWjZY7+HProKcUxLnD6t AwhEsVlrxVJKqNPRjtB9/NzqlXxW5TEuPKHzGK90ZWRdnErj5pDWLbQiG2bcIvZ6 JHYZeWHhKyRADj29KzvD3nFJODzK8fqkYTK0k//dTbmFsVwRnCGrKM13Dt8f5Cly /FZsISOxq7JIaWQVdkoOOx+9P50dxWYN2Ibzl+upFJqs9ZNvbKA= =/K9E -----END PGP SIGNATURE----- Merge tag 'net-5.15-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net Pull networking fixes from Jakub Kicinski: "Quite calm. The noisy DSA driver (embedded switches) changes, and adjustment to IPv6 IOAM behavior add to diffstat's bottom line but are not scary. 
Current release - regressions: - af_unix: rename UNIX-DGRAM to UNIX to maintain backwards compatibility - procfs: revert "add seq_puts() statement for dev_mcast", minor format change broke user space Current release - new code bugs: - dsa: fix bridge_num not getting cleared after ports leaving the bridge, resource leak - dsa: tag_dsa: send packets with TX fwd offload from VLAN-unaware bridges using VID 0, prevent packet drops if pvid is removed - dsa: mv88e6xxx: keep the pvid at 0 when VLAN-unaware, prevent HW getting confused about station to VLAN mapping Previous releases - regressions: - virtio-net: fix for skb_over_panic inside big mode - phy: do not shutdown PHYs in READY state - dsa: mv88e6xxx: don't use PHY_DETECT on internal PHY's, fix link LED staying lit after ifdown - mptcp: fix possible infinite wait on recvmsg(MSG_WAITALL) - mqprio: Correct stats in mqprio_dump_class_stats() - ice: fix deadlock for Tx timestamp tracking flush - stmmac: fix feature detection on old hardware Previous releases - always broken: - sctp: account stream padding length for reconf chunk - icmp: fix icmp_ext_echo_iio parsing in icmp_build_probe() - isdn: cpai: check ctr->cnr to avoid array index out of bound - isdn: mISDN: fix sleeping function called from invalid context - nfc: nci: fix potential UAF of rf_conn_info object - dsa: microchip: prevent ksz_mib_read_work from kicking back in after it's canceled in .remove and crashing - dsa: mv88e6xxx: isolate the ATU databases of standalone and bridged ports - dsa: sja1105, ocelot: break circular dependency between switch and tag drivers - dsa: felix: improve timestamping in presence of packe loss - mlxsw: thermal: fix out-of-bounds memory accesses Misc: - ipv6: ioam: move the check for undefined bits to improve interoperability" * tag 'net-5.15-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (60 commits) icmp: fix icmp_ext_echo_iio parsing in icmp_build_probe MAINTAINERS: Update the devicetree documentation path of imx fec driver sctp: account stream padding length for reconf chunk mlxsw: thermal: Fix out-of-bounds memory accesses ethernet: s2io: fix setting mac address during resume NFC: digital: fix possible memory leak in digital_in_send_sdd_req() NFC: digital: fix possible memory leak in digital_tg_listen_mdaa() nfc: fix error handling of nfc_proto_register() Revert "net: procfs: add seq_puts() statement for dev_mcast" net: encx24j600: check error in devm_regmap_init_encx24j600 net: korina: select CRC32 net: arc: select CRC32 net: dsa: felix: break at first CPU port during init and teardown net: dsa: tag_ocelot_8021q: fix inability to inject STP BPDUs into BLOCKING ports net: dsa: felix: purge skb from TX timestamping queue if it cannot be sent net: dsa: tag_ocelot_8021q: break circular dependency with ocelot switch lib net: dsa: tag_ocelot: break circular dependency with ocelot switch lib driver net: mscc: ocelot: cross-check the sequence id from the timestamp FIFO with the skb PTP header net: mscc: ocelot: deny TX timestamping of non-PTP packets net: mscc: ocelot: warn when a PTP IRQ is raised for an unknown skb ...
This commit is contained in:
commit
ec681c53f8
74 changed files with 1013 additions and 589 deletions
|
@ -21,6 +21,7 @@ select:
|
|||
contains:
|
||||
enum:
|
||||
- snps,dwmac
|
||||
- snps,dwmac-3.40a
|
||||
- snps,dwmac-3.50a
|
||||
- snps,dwmac-3.610
|
||||
- snps,dwmac-3.70a
|
||||
|
@ -76,6 +77,7 @@ properties:
|
|||
- rockchip,rk3399-gmac
|
||||
- rockchip,rv1108-gmac
|
||||
- snps,dwmac
|
||||
- snps,dwmac-3.40a
|
||||
- snps,dwmac-3.50a
|
||||
- snps,dwmac-3.610
|
||||
- snps,dwmac-3.70a
|
||||
|
|
|
@ -7440,7 +7440,7 @@ FREESCALE IMX / MXC FEC DRIVER
|
|||
M: Joakim Zhang <qiangqing.zhang@nxp.com>
|
||||
L: netdev@vger.kernel.org
|
||||
S: Maintained
|
||||
F: Documentation/devicetree/bindings/net/fsl-fec.txt
|
||||
F: Documentation/devicetree/bindings/net/fsl,fec.yaml
|
||||
F: drivers/net/ethernet/freescale/fec.h
|
||||
F: drivers/net/ethernet/freescale/fec_main.c
|
||||
F: drivers/net/ethernet/freescale/fec_ptp.c
|
||||
|
@ -11153,6 +11153,7 @@ S: Maintained
|
|||
F: Documentation/devicetree/bindings/net/dsa/marvell.txt
|
||||
F: Documentation/networking/devlink/mv88e6xxx.rst
|
||||
F: drivers/net/dsa/mv88e6xxx/
|
||||
F: include/linux/dsa/mv88e6xxx.h
|
||||
F: include/linux/platform_data/mv88e6xxx.h
|
||||
|
||||
MARVELL ARMADA 3700 PHY DRIVERS
|
||||
|
|
|
@ -47,7 +47,7 @@
|
|||
};
|
||||
|
||||
gmac: eth@e0800000 {
|
||||
compatible = "st,spear600-gmac";
|
||||
compatible = "snps,dwmac-3.40a";
|
||||
reg = <0xe0800000 0x8000>;
|
||||
interrupts = <23 22>;
|
||||
interrupt-names = "macirq", "eth_wake_irq";
|
||||
|
|
|
@ -480,6 +480,11 @@ int detach_capi_ctr(struct capi_ctr *ctr)
|
|||
|
||||
ctr_down(ctr, CAPI_CTR_DETACHED);
|
||||
|
||||
if (ctr->cnr < 1 || ctr->cnr - 1 >= CAPI_MAXCONTR) {
|
||||
err = -EINVAL;
|
||||
goto unlock_out;
|
||||
}
|
||||
|
||||
if (capi_controller[ctr->cnr - 1] != ctr) {
|
||||
err = -EINVAL;
|
||||
goto unlock_out;
|
||||
|
|
|
@ -949,8 +949,8 @@ nj_release(struct tiger_hw *card)
|
|||
nj_disable_hwirq(card);
|
||||
mode_tiger(&card->bc[0], ISDN_P_NONE);
|
||||
mode_tiger(&card->bc[1], ISDN_P_NONE);
|
||||
card->isac.release(&card->isac);
|
||||
spin_unlock_irqrestore(&card->lock, flags);
|
||||
card->isac.release(&card->isac);
|
||||
release_region(card->base, card->base_s);
|
||||
card->base_s = 0;
|
||||
}
|
||||
|
|
|
@ -449,8 +449,10 @@ EXPORT_SYMBOL(ksz_switch_register);
|
|||
void ksz_switch_remove(struct ksz_device *dev)
|
||||
{
|
||||
/* timer started */
|
||||
if (dev->mib_read_interval)
|
||||
if (dev->mib_read_interval) {
|
||||
dev->mib_read_interval = 0;
|
||||
cancel_delayed_work_sync(&dev->mib_read);
|
||||
}
|
||||
|
||||
dev->dev_ops->exit(dev);
|
||||
dsa_unregister_switch(dev->ds);
|
||||
|
|
|
@ -12,6 +12,7 @@
|
|||
|
||||
#include <linux/bitfield.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/dsa/mv88e6xxx.h>
|
||||
#include <linux/etherdevice.h>
|
||||
#include <linux/ethtool.h>
|
||||
#include <linux/if_bridge.h>
|
||||
|
@ -749,7 +750,11 @@ static void mv88e6xxx_mac_link_down(struct dsa_switch *ds, int port,
|
|||
ops = chip->info->ops;
|
||||
|
||||
mv88e6xxx_reg_lock(chip);
|
||||
if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
|
||||
/* Internal PHYs propagate their configuration directly to the MAC.
|
||||
* External PHYs depend on whether the PPU is enabled for this port.
|
||||
*/
|
||||
if (((!mv88e6xxx_phy_is_internal(ds, port) &&
|
||||
!mv88e6xxx_port_ppu_updates(chip, port)) ||
|
||||
mode == MLO_AN_FIXED) && ops->port_sync_link)
|
||||
err = ops->port_sync_link(chip, port, mode, false);
|
||||
mv88e6xxx_reg_unlock(chip);
|
||||
|
@ -772,7 +777,12 @@ static void mv88e6xxx_mac_link_up(struct dsa_switch *ds, int port,
|
|||
ops = chip->info->ops;
|
||||
|
||||
mv88e6xxx_reg_lock(chip);
|
||||
if (!mv88e6xxx_port_ppu_updates(chip, port) || mode == MLO_AN_FIXED) {
|
||||
/* Internal PHYs propagate their configuration directly to the MAC.
|
||||
* External PHYs depend on whether the PPU is enabled for this port.
|
||||
*/
|
||||
if ((!mv88e6xxx_phy_is_internal(ds, port) &&
|
||||
!mv88e6xxx_port_ppu_updates(chip, port)) ||
|
||||
mode == MLO_AN_FIXED) {
|
||||
/* FIXME: for an automedia port, should we force the link
|
||||
* down here - what if the link comes up due to "other" media
|
||||
* while we're bringing the port up, how is the exclusivity
|
||||
|
@ -1677,6 +1687,30 @@ static int mv88e6xxx_port_check_hw_vlan(struct dsa_switch *ds, int port,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int mv88e6xxx_port_commit_pvid(struct mv88e6xxx_chip *chip, int port)
|
||||
{
|
||||
struct dsa_port *dp = dsa_to_port(chip->ds, port);
|
||||
struct mv88e6xxx_port *p = &chip->ports[port];
|
||||
u16 pvid = MV88E6XXX_VID_STANDALONE;
|
||||
bool drop_untagged = false;
|
||||
int err;
|
||||
|
||||
if (dp->bridge_dev) {
|
||||
if (br_vlan_enabled(dp->bridge_dev)) {
|
||||
pvid = p->bridge_pvid.vid;
|
||||
drop_untagged = !p->bridge_pvid.valid;
|
||||
} else {
|
||||
pvid = MV88E6XXX_VID_BRIDGED;
|
||||
}
|
||||
}
|
||||
|
||||
err = mv88e6xxx_port_set_pvid(chip, port, pvid);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return mv88e6xxx_port_drop_untagged(chip, port, drop_untagged);
|
||||
}
|
||||
|
||||
static int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
|
||||
bool vlan_filtering,
|
||||
struct netlink_ext_ack *extack)
|
||||
|
@ -1690,7 +1724,16 @@ static int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
|
|||
return -EOPNOTSUPP;
|
||||
|
||||
mv88e6xxx_reg_lock(chip);
|
||||
|
||||
err = mv88e6xxx_port_set_8021q_mode(chip, port, mode);
|
||||
if (err)
|
||||
goto unlock;
|
||||
|
||||
err = mv88e6xxx_port_commit_pvid(chip, port);
|
||||
if (err)
|
||||
goto unlock;
|
||||
|
||||
unlock:
|
||||
mv88e6xxx_reg_unlock(chip);
|
||||
|
||||
return err;
|
||||
|
@ -1725,11 +1768,15 @@ static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
|
|||
u16 fid;
|
||||
int err;
|
||||
|
||||
/* Null VLAN ID corresponds to the port private database */
|
||||
/* Ports have two private address databases: one for when the port is
|
||||
* standalone and one for when the port is under a bridge and the
|
||||
* 802.1Q mode is disabled. When the port is standalone, DSA wants its
|
||||
* address database to remain 100% empty, so we never load an ATU entry
|
||||
* into a standalone port's database. Therefore, translate the null
|
||||
* VLAN ID into the port's database used for VLAN-unaware bridging.
|
||||
*/
|
||||
if (vid == 0) {
|
||||
err = mv88e6xxx_port_get_fid(chip, port, &fid);
|
||||
if (err)
|
||||
return err;
|
||||
fid = MV88E6XXX_FID_BRIDGED;
|
||||
} else {
|
||||
err = mv88e6xxx_vtu_get(chip, vid, &vlan);
|
||||
if (err)
|
||||
|
@ -2123,6 +2170,7 @@ static int mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
|
|||
struct mv88e6xxx_chip *chip = ds->priv;
|
||||
bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
|
||||
bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
|
||||
struct mv88e6xxx_port *p = &chip->ports[port];
|
||||
bool warn;
|
||||
u8 member;
|
||||
int err;
|
||||
|
@ -2156,13 +2204,21 @@ static int mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
|
|||
}
|
||||
|
||||
if (pvid) {
|
||||
err = mv88e6xxx_port_set_pvid(chip, port, vlan->vid);
|
||||
if (err) {
|
||||
dev_err(ds->dev, "p%d: failed to set PVID %d\n",
|
||||
port, vlan->vid);
|
||||
p->bridge_pvid.vid = vlan->vid;
|
||||
p->bridge_pvid.valid = true;
|
||||
|
||||
err = mv88e6xxx_port_commit_pvid(chip, port);
|
||||
if (err)
|
||||
goto out;
|
||||
} else if (vlan->vid && p->bridge_pvid.vid == vlan->vid) {
|
||||
/* The old pvid was reinstalled as a non-pvid VLAN */
|
||||
p->bridge_pvid.valid = false;
|
||||
|
||||
err = mv88e6xxx_port_commit_pvid(chip, port);
|
||||
if (err)
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
out:
|
||||
mv88e6xxx_reg_unlock(chip);
|
||||
|
||||
|
@ -2212,6 +2268,7 @@ static int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
|
|||
const struct switchdev_obj_port_vlan *vlan)
|
||||
{
|
||||
struct mv88e6xxx_chip *chip = ds->priv;
|
||||
struct mv88e6xxx_port *p = &chip->ports[port];
|
||||
int err = 0;
|
||||
u16 pvid;
|
||||
|
||||
|
@ -2229,7 +2286,9 @@ static int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
|
|||
goto unlock;
|
||||
|
||||
if (vlan->vid == pvid) {
|
||||
err = mv88e6xxx_port_set_pvid(chip, port, 0);
|
||||
p->bridge_pvid.valid = false;
|
||||
|
||||
err = mv88e6xxx_port_commit_pvid(chip, port);
|
||||
if (err)
|
||||
goto unlock;
|
||||
}
|
||||
|
@ -2393,7 +2452,16 @@ static int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
|
|||
int err;
|
||||
|
||||
mv88e6xxx_reg_lock(chip);
|
||||
|
||||
err = mv88e6xxx_bridge_map(chip, br);
|
||||
if (err)
|
||||
goto unlock;
|
||||
|
||||
err = mv88e6xxx_port_commit_pvid(chip, port);
|
||||
if (err)
|
||||
goto unlock;
|
||||
|
||||
unlock:
|
||||
mv88e6xxx_reg_unlock(chip);
|
||||
|
||||
return err;
|
||||
|
@ -2403,11 +2471,20 @@ static void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port,
|
|||
struct net_device *br)
|
||||
{
|
||||
struct mv88e6xxx_chip *chip = ds->priv;
|
||||
int err;
|
||||
|
||||
mv88e6xxx_reg_lock(chip);
|
||||
|
||||
if (mv88e6xxx_bridge_map(chip, br) ||
|
||||
mv88e6xxx_port_vlan_map(chip, port))
|
||||
dev_err(ds->dev, "failed to remap in-chip Port VLAN\n");
|
||||
|
||||
err = mv88e6xxx_port_commit_pvid(chip, port);
|
||||
if (err)
|
||||
dev_err(ds->dev,
|
||||
"port %d failed to restore standalone pvid: %pe\n",
|
||||
port, ERR_PTR(err));
|
||||
|
||||
mv88e6xxx_reg_unlock(chip);
|
||||
}
|
||||
|
||||
|
@ -2853,6 +2930,20 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
|
|||
if (err)
|
||||
return err;
|
||||
|
||||
/* Associate MV88E6XXX_VID_BRIDGED with MV88E6XXX_FID_BRIDGED in the
|
||||
* ATU by virtue of the fact that mv88e6xxx_atu_new() will pick it as
|
||||
* the first free FID after MV88E6XXX_FID_STANDALONE. This will be used
|
||||
* as the private PVID on ports under a VLAN-unaware bridge.
|
||||
* Shared (DSA and CPU) ports must also be members of it, to translate
|
||||
* the VID from the DSA tag into MV88E6XXX_FID_BRIDGED, instead of
|
||||
* relying on their port default FID.
|
||||
*/
|
||||
err = mv88e6xxx_port_vlan_join(chip, port, MV88E6XXX_VID_BRIDGED,
|
||||
MV88E6XXX_G1_VTU_DATA_MEMBER_TAG_UNTAGGED,
|
||||
false);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
if (chip->info->ops->port_set_jumbo_size) {
|
||||
err = chip->info->ops->port_set_jumbo_size(chip, port, 10218);
|
||||
if (err)
|
||||
|
@ -2925,7 +3016,7 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
|
|||
* database, and allow bidirectional communication between the
|
||||
* CPU and DSA port(s), and the other ports.
|
||||
*/
|
||||
err = mv88e6xxx_port_set_fid(chip, port, 0);
|
||||
err = mv88e6xxx_port_set_fid(chip, port, MV88E6XXX_FID_STANDALONE);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
|
@ -3115,6 +3206,10 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
|
|||
}
|
||||
}
|
||||
|
||||
err = mv88e6xxx_vtu_setup(chip);
|
||||
if (err)
|
||||
goto unlock;
|
||||
|
||||
/* Setup Switch Port Registers */
|
||||
for (i = 0; i < mv88e6xxx_num_ports(chip); i++) {
|
||||
if (dsa_is_unused_port(ds, i))
|
||||
|
@ -3144,10 +3239,6 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
|
|||
if (err)
|
||||
goto unlock;
|
||||
|
||||
err = mv88e6xxx_vtu_setup(chip);
|
||||
if (err)
|
||||
goto unlock;
|
||||
|
||||
err = mv88e6xxx_pvt_setup(chip);
|
||||
if (err)
|
||||
goto unlock;
|
||||
|
|
|
@ -21,6 +21,9 @@
|
|||
#define EDSA_HLEN 8
|
||||
#define MV88E6XXX_N_FID 4096
|
||||
|
||||
#define MV88E6XXX_FID_STANDALONE 0
|
||||
#define MV88E6XXX_FID_BRIDGED 1
|
||||
|
||||
/* PVT limits for 4-bit port and 5-bit switch */
|
||||
#define MV88E6XXX_MAX_PVT_SWITCHES 32
|
||||
#define MV88E6XXX_MAX_PVT_PORTS 16
|
||||
|
@ -246,9 +249,15 @@ struct mv88e6xxx_policy {
|
|||
u16 vid;
|
||||
};
|
||||
|
||||
struct mv88e6xxx_vlan {
|
||||
u16 vid;
|
||||
bool valid;
|
||||
};
|
||||
|
||||
struct mv88e6xxx_port {
|
||||
struct mv88e6xxx_chip *chip;
|
||||
int port;
|
||||
struct mv88e6xxx_vlan bridge_pvid;
|
||||
u64 serdes_stats[2];
|
||||
u64 atu_member_violation;
|
||||
u64 atu_miss_violation;
|
||||
|
|
|
@ -1257,6 +1257,27 @@ int mv88e6xxx_port_set_8021q_mode(struct mv88e6xxx_chip *chip, int port,
|
|||
return 0;
|
||||
}
|
||||
|
||||
int mv88e6xxx_port_drop_untagged(struct mv88e6xxx_chip *chip, int port,
|
||||
bool drop_untagged)
|
||||
{
|
||||
u16 old, new;
|
||||
int err;
|
||||
|
||||
err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_CTL2, &old);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
if (drop_untagged)
|
||||
new = old | MV88E6XXX_PORT_CTL2_DISCARD_UNTAGGED;
|
||||
else
|
||||
new = old & ~MV88E6XXX_PORT_CTL2_DISCARD_UNTAGGED;
|
||||
|
||||
if (new == old)
|
||||
return 0;
|
||||
|
||||
return mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_CTL2, new);
|
||||
}
|
||||
|
||||
int mv88e6xxx_port_set_map_da(struct mv88e6xxx_chip *chip, int port)
|
||||
{
|
||||
u16 reg;
|
||||
|
|
|
@ -423,6 +423,8 @@ int mv88e6393x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
|
|||
phy_interface_t mode);
|
||||
int mv88e6185_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode);
|
||||
int mv88e6352_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode);
|
||||
int mv88e6xxx_port_drop_untagged(struct mv88e6xxx_chip *chip, int port,
|
||||
bool drop_untagged);
|
||||
int mv88e6xxx_port_set_map_da(struct mv88e6xxx_chip *chip, int port);
|
||||
int mv88e6095_port_set_upstream_port(struct mv88e6xxx_chip *chip, int port,
|
||||
int upstream_port);
|
||||
|
|
|
@ -266,12 +266,12 @@ static void felix_8021q_cpu_port_deinit(struct ocelot *ocelot, int port)
|
|||
*/
|
||||
static int felix_setup_mmio_filtering(struct felix *felix)
|
||||
{
|
||||
unsigned long user_ports = 0, cpu_ports = 0;
|
||||
unsigned long user_ports = dsa_user_ports(felix->ds);
|
||||
struct ocelot_vcap_filter *redirect_rule;
|
||||
struct ocelot_vcap_filter *tagging_rule;
|
||||
struct ocelot *ocelot = &felix->ocelot;
|
||||
struct dsa_switch *ds = felix->ds;
|
||||
int port, ret;
|
||||
int cpu = -1, port, ret;
|
||||
|
||||
tagging_rule = kzalloc(sizeof(struct ocelot_vcap_filter), GFP_KERNEL);
|
||||
if (!tagging_rule)
|
||||
|
@ -284,12 +284,15 @@ static int felix_setup_mmio_filtering(struct felix *felix)
|
|||
}
|
||||
|
||||
for (port = 0; port < ocelot->num_phys_ports; port++) {
|
||||
if (dsa_is_user_port(ds, port))
|
||||
user_ports |= BIT(port);
|
||||
if (dsa_is_cpu_port(ds, port))
|
||||
cpu_ports |= BIT(port);
|
||||
if (dsa_is_cpu_port(ds, port)) {
|
||||
cpu = port;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (cpu < 0)
|
||||
return -EINVAL;
|
||||
|
||||
tagging_rule->key_type = OCELOT_VCAP_KEY_ETYPE;
|
||||
*(__be16 *)tagging_rule->key.etype.etype.value = htons(ETH_P_1588);
|
||||
*(__be16 *)tagging_rule->key.etype.etype.mask = htons(0xffff);
|
||||
|
@ -325,7 +328,7 @@ static int felix_setup_mmio_filtering(struct felix *felix)
|
|||
* the CPU port module
|
||||
*/
|
||||
redirect_rule->action.mask_mode = OCELOT_MASK_MODE_REDIRECT;
|
||||
redirect_rule->action.port_mask = cpu_ports;
|
||||
redirect_rule->action.port_mask = BIT(cpu);
|
||||
} else {
|
||||
/* Trap PTP packets only to the CPU port module (which is
|
||||
* redirected to the NPI port)
|
||||
|
@ -1074,6 +1077,101 @@ static int felix_init_structs(struct felix *felix, int num_phys_ports)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void ocelot_port_purge_txtstamp_skb(struct ocelot *ocelot, int port,
|
||||
struct sk_buff *skb)
|
||||
{
|
||||
struct ocelot_port *ocelot_port = ocelot->ports[port];
|
||||
struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone;
|
||||
struct sk_buff *skb_match = NULL, *skb_tmp;
|
||||
unsigned long flags;
|
||||
|
||||
if (!clone)
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&ocelot_port->tx_skbs.lock, flags);
|
||||
|
||||
skb_queue_walk_safe(&ocelot_port->tx_skbs, skb, skb_tmp) {
|
||||
if (skb != clone)
|
||||
continue;
|
||||
__skb_unlink(skb, &ocelot_port->tx_skbs);
|
||||
skb_match = skb;
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&ocelot_port->tx_skbs.lock, flags);
|
||||
|
||||
WARN_ONCE(!skb_match,
|
||||
"Could not find skb clone in TX timestamping list\n");
|
||||
}
|
||||
|
||||
#define work_to_xmit_work(w) \
|
||||
container_of((w), struct felix_deferred_xmit_work, work)
|
||||
|
||||
static void felix_port_deferred_xmit(struct kthread_work *work)
|
||||
{
|
||||
struct felix_deferred_xmit_work *xmit_work = work_to_xmit_work(work);
|
||||
struct dsa_switch *ds = xmit_work->dp->ds;
|
||||
struct sk_buff *skb = xmit_work->skb;
|
||||
u32 rew_op = ocelot_ptp_rew_op(skb);
|
||||
struct ocelot *ocelot = ds->priv;
|
||||
int port = xmit_work->dp->index;
|
||||
int retries = 10;
|
||||
|
||||
do {
|
||||
if (ocelot_can_inject(ocelot, 0))
|
||||
break;
|
||||
|
||||
cpu_relax();
|
||||
} while (--retries);
|
||||
|
||||
if (!retries) {
|
||||
dev_err(ocelot->dev, "port %d failed to inject skb\n",
|
||||
port);
|
||||
ocelot_port_purge_txtstamp_skb(ocelot, port, skb);
|
||||
kfree_skb(skb);
|
||||
return;
|
||||
}
|
||||
|
||||
ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb);
|
||||
|
||||
consume_skb(skb);
|
||||
kfree(xmit_work);
|
||||
}
|
||||
|
||||
static int felix_port_setup_tagger_data(struct dsa_switch *ds, int port)
|
||||
{
|
||||
struct dsa_port *dp = dsa_to_port(ds, port);
|
||||
struct ocelot *ocelot = ds->priv;
|
||||
struct felix *felix = ocelot_to_felix(ocelot);
|
||||
struct felix_port *felix_port;
|
||||
|
||||
if (!dsa_port_is_user(dp))
|
||||
return 0;
|
||||
|
||||
felix_port = kzalloc(sizeof(*felix_port), GFP_KERNEL);
|
||||
if (!felix_port)
|
||||
return -ENOMEM;
|
||||
|
||||
felix_port->xmit_worker = felix->xmit_worker;
|
||||
felix_port->xmit_work_fn = felix_port_deferred_xmit;
|
||||
|
||||
dp->priv = felix_port;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void felix_port_teardown_tagger_data(struct dsa_switch *ds, int port)
|
||||
{
|
||||
struct dsa_port *dp = dsa_to_port(ds, port);
|
||||
struct felix_port *felix_port = dp->priv;
|
||||
|
||||
if (!felix_port)
|
||||
return;
|
||||
|
||||
dp->priv = NULL;
|
||||
kfree(felix_port);
|
||||
}
|
||||
|
||||
/* Hardware initialization done here so that we can allocate structures with
|
||||
* devm without fear of dsa_register_switch returning -EPROBE_DEFER and causing
|
||||
* us to allocate structures twice (leak memory) and map PCI memory twice
|
||||
|
@ -1102,6 +1200,12 @@ static int felix_setup(struct dsa_switch *ds)
|
|||
}
|
||||
}
|
||||
|
||||
felix->xmit_worker = kthread_create_worker(0, "felix_xmit");
|
||||
if (IS_ERR(felix->xmit_worker)) {
|
||||
err = PTR_ERR(felix->xmit_worker);
|
||||
goto out_deinit_timestamp;
|
||||
}
|
||||
|
||||
for (port = 0; port < ds->num_ports; port++) {
|
||||
if (dsa_is_unused_port(ds, port))
|
||||
continue;
|
||||
|
@ -1112,6 +1216,14 @@ static int felix_setup(struct dsa_switch *ds)
|
|||
* bits of vlan tag.
|
||||
*/
|
||||
felix_port_qos_map_init(ocelot, port);
|
||||
|
||||
err = felix_port_setup_tagger_data(ds, port);
|
||||
if (err) {
|
||||
dev_err(ds->dev,
|
||||
"port %d failed to set up tagger data: %pe\n",
|
||||
port, ERR_PTR(err));
|
||||
goto out_deinit_ports;
|
||||
}
|
||||
}
|
||||
|
||||
err = ocelot_devlink_sb_register(ocelot);
|
||||
|
@ -1126,6 +1238,7 @@ static int felix_setup(struct dsa_switch *ds)
|
|||
* there's no real point in checking for errors.
|
||||
*/
|
||||
felix_set_tag_protocol(ds, port, felix->tag_proto);
|
||||
break;
|
||||
}
|
||||
|
||||
ds->mtu_enforcement_ingress = true;
|
||||
|
@ -1138,9 +1251,13 @@ out_deinit_ports:
|
|||
if (dsa_is_unused_port(ds, port))
|
||||
continue;
|
||||
|
||||
felix_port_teardown_tagger_data(ds, port);
|
||||
ocelot_deinit_port(ocelot, port);
|
||||
}
|
||||
|
||||
kthread_destroy_worker(felix->xmit_worker);
|
||||
|
||||
out_deinit_timestamp:
|
||||
ocelot_deinit_timestamp(ocelot);
|
||||
ocelot_deinit(ocelot);
|
||||
|
||||
|
@ -1162,19 +1279,23 @@ static void felix_teardown(struct dsa_switch *ds)
|
|||
continue;
|
||||
|
||||
felix_del_tag_protocol(ds, port, felix->tag_proto);
|
||||
break;
|
||||
}
|
||||
|
||||
ocelot_devlink_sb_unregister(ocelot);
|
||||
ocelot_deinit_timestamp(ocelot);
|
||||
ocelot_deinit(ocelot);
|
||||
|
||||
for (port = 0; port < ocelot->num_phys_ports; port++) {
|
||||
if (dsa_is_unused_port(ds, port))
|
||||
continue;
|
||||
|
||||
felix_port_teardown_tagger_data(ds, port);
|
||||
ocelot_deinit_port(ocelot, port);
|
||||
}
|
||||
|
||||
kthread_destroy_worker(felix->xmit_worker);
|
||||
|
||||
ocelot_devlink_sb_unregister(ocelot);
|
||||
ocelot_deinit_timestamp(ocelot);
|
||||
ocelot_deinit(ocelot);
|
||||
|
||||
if (felix->info->mdio_bus_free)
|
||||
felix->info->mdio_bus_free(ocelot);
|
||||
}
|
||||
|
@ -1291,8 +1412,12 @@ static void felix_txtstamp(struct dsa_switch *ds, int port,
|
|||
if (!ocelot->ptp)
|
||||
return;
|
||||
|
||||
if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone))
|
||||
if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone)) {
|
||||
dev_err_ratelimited(ds->dev,
|
||||
"port %d delivering skb without TX timestamp\n",
|
||||
port);
|
||||
return;
|
||||
}
|
||||
|
||||
if (clone)
|
||||
OCELOT_SKB_CB(skb)->clone = clone;
|
||||
|
|
|
@ -62,6 +62,7 @@ struct felix {
|
|||
resource_size_t switch_base;
|
||||
resource_size_t imdio_base;
|
||||
enum dsa_tag_protocol tag_proto;
|
||||
struct kthread_worker *xmit_worker;
|
||||
};
|
||||
|
||||
struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port);
|
||||
|
|
|
@ -3117,7 +3117,7 @@ static void sja1105_teardown(struct dsa_switch *ds)
|
|||
sja1105_static_config_free(&priv->static_config);
|
||||
}
|
||||
|
||||
const struct dsa_switch_ops sja1105_switch_ops = {
|
||||
static const struct dsa_switch_ops sja1105_switch_ops = {
|
||||
.get_tag_protocol = sja1105_get_tag_protocol,
|
||||
.setup = sja1105_setup,
|
||||
.teardown = sja1105_teardown,
|
||||
|
@ -3166,7 +3166,6 @@ const struct dsa_switch_ops sja1105_switch_ops = {
|
|||
.port_bridge_tx_fwd_offload = dsa_tag_8021q_bridge_tx_fwd_offload,
|
||||
.port_bridge_tx_fwd_unoffload = dsa_tag_8021q_bridge_tx_fwd_unoffload,
|
||||
};
|
||||
EXPORT_SYMBOL_GPL(sja1105_switch_ops);
|
||||
|
||||
static const struct of_device_id sja1105_dt_ids[];
|
||||
|
||||
|
|
|
@ -64,6 +64,7 @@ enum sja1105_ptp_clk_mode {
|
|||
static int sja1105_change_rxtstamping(struct sja1105_private *priv,
|
||||
bool on)
|
||||
{
|
||||
struct sja1105_tagger_data *tagger_data = &priv->tagger_data;
|
||||
struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
|
||||
struct sja1105_general_params_entry *general_params;
|
||||
struct sja1105_table *table;
|
||||
|
@ -79,7 +80,7 @@ static int sja1105_change_rxtstamping(struct sja1105_private *priv,
|
|||
priv->tagger_data.stampable_skb = NULL;
|
||||
}
|
||||
ptp_cancel_worker_sync(ptp_data->clock);
|
||||
skb_queue_purge(&ptp_data->skb_txtstamp_queue);
|
||||
skb_queue_purge(&tagger_data->skb_txtstamp_queue);
|
||||
skb_queue_purge(&ptp_data->skb_rxtstamp_queue);
|
||||
|
||||
return sja1105_static_config_reload(priv, SJA1105_RX_HWTSTAMPING);
|
||||
|
@ -452,40 +453,6 @@ bool sja1105_port_rxtstamp(struct dsa_switch *ds, int port,
|
|||
return priv->info->rxtstamp(ds, port, skb);
|
||||
}
|
||||
|
||||
void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port, u8 ts_id,
|
||||
enum sja1110_meta_tstamp dir, u64 tstamp)
|
||||
{
|
||||
struct sja1105_private *priv = ds->priv;
|
||||
struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
|
||||
struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
|
||||
struct skb_shared_hwtstamps shwt = {0};
|
||||
|
||||
/* We don't care about RX timestamps on the CPU port */
|
||||
if (dir == SJA1110_META_TSTAMP_RX)
|
||||
return;
|
||||
|
||||
spin_lock(&ptp_data->skb_txtstamp_queue.lock);
|
||||
|
||||
skb_queue_walk_safe(&ptp_data->skb_txtstamp_queue, skb, skb_tmp) {
|
||||
if (SJA1105_SKB_CB(skb)->ts_id != ts_id)
|
||||
continue;
|
||||
|
||||
__skb_unlink(skb, &ptp_data->skb_txtstamp_queue);
|
||||
skb_match = skb;
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock(&ptp_data->skb_txtstamp_queue.lock);
|
||||
|
||||
if (WARN_ON(!skb_match))
|
||||
return;
|
||||
|
||||
shwt.hwtstamp = ns_to_ktime(sja1105_ticks_to_ns(tstamp));
|
||||
skb_complete_tx_timestamp(skb_match, &shwt);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(sja1110_process_meta_tstamp);
|
||||
|
||||
/* In addition to cloning the skb which is done by the common
|
||||
* sja1105_port_txtstamp, we need to generate a timestamp ID and save the
|
||||
* packet to the TX timestamping queue.
|
||||
|
@ -494,7 +461,6 @@ void sja1110_txtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb)
|
|||
{
|
||||
struct sk_buff *clone = SJA1105_SKB_CB(skb)->clone;
|
||||
struct sja1105_private *priv = ds->priv;
|
||||
struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
|
||||
struct sja1105_port *sp = &priv->ports[port];
|
||||
u8 ts_id;
|
||||
|
||||
|
@ -510,7 +476,7 @@ void sja1110_txtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb)
|
|||
|
||||
spin_unlock(&sp->data->meta_lock);
|
||||
|
||||
skb_queue_tail(&ptp_data->skb_txtstamp_queue, clone);
|
||||
skb_queue_tail(&sp->data->skb_txtstamp_queue, clone);
|
||||
}
|
||||
|
||||
/* Called from dsa_skb_tx_timestamp. This callback is just to clone
|
||||
|
@ -953,7 +919,7 @@ int sja1105_ptp_clock_register(struct dsa_switch *ds)
|
|||
/* Only used on SJA1105 */
|
||||
skb_queue_head_init(&ptp_data->skb_rxtstamp_queue);
|
||||
/* Only used on SJA1110 */
|
||||
skb_queue_head_init(&ptp_data->skb_txtstamp_queue);
|
||||
skb_queue_head_init(&tagger_data->skb_txtstamp_queue);
|
||||
spin_lock_init(&tagger_data->meta_lock);
|
||||
|
||||
ptp_data->clock = ptp_clock_register(&ptp_data->caps, ds->dev);
|
||||
|
@ -971,6 +937,7 @@ int sja1105_ptp_clock_register(struct dsa_switch *ds)
|
|||
void sja1105_ptp_clock_unregister(struct dsa_switch *ds)
|
||||
{
|
||||
struct sja1105_private *priv = ds->priv;
|
||||
struct sja1105_tagger_data *tagger_data = &priv->tagger_data;
|
||||
struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
|
||||
|
||||
if (IS_ERR_OR_NULL(ptp_data->clock))
|
||||
|
@ -978,7 +945,7 @@ void sja1105_ptp_clock_unregister(struct dsa_switch *ds)
|
|||
|
||||
del_timer_sync(&ptp_data->extts_timer);
|
||||
ptp_cancel_worker_sync(ptp_data->clock);
|
||||
skb_queue_purge(&ptp_data->skb_txtstamp_queue);
|
||||
skb_queue_purge(&tagger_data->skb_txtstamp_queue);
|
||||
skb_queue_purge(&ptp_data->skb_rxtstamp_queue);
|
||||
ptp_clock_unregister(ptp_data->clock);
|
||||
ptp_data->clock = NULL;
|
||||
|
|
|
@ -8,21 +8,6 @@
|
|||
|
||||
#if IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP)
|
||||
|
||||
/* Timestamps are in units of 8 ns clock ticks (equivalent to
|
||||
* a fixed 125 MHz clock).
|
||||
*/
|
||||
#define SJA1105_TICK_NS 8
|
||||
|
||||
static inline s64 ns_to_sja1105_ticks(s64 ns)
|
||||
{
|
||||
return ns / SJA1105_TICK_NS;
|
||||
}
|
||||
|
||||
static inline s64 sja1105_ticks_to_ns(s64 ticks)
|
||||
{
|
||||
return ticks * SJA1105_TICK_NS;
|
||||
}
|
||||
|
||||
/* Calculate the first base_time in the future that satisfies this
|
||||
* relationship:
|
||||
*
|
||||
|
@ -77,10 +62,6 @@ struct sja1105_ptp_data {
|
|||
struct timer_list extts_timer;
|
||||
/* Used only on SJA1105 to reconstruct partial timestamps */
|
||||
struct sk_buff_head skb_rxtstamp_queue;
|
||||
/* Used on SJA1110 where meta frames are generated only for
|
||||
* 2-step TX timestamps
|
||||
*/
|
||||
struct sk_buff_head skb_txtstamp_queue;
|
||||
struct ptp_clock_info caps;
|
||||
struct ptp_clock *clock;
|
||||
struct sja1105_ptp_cmd cmd;
|
||||
|
|
|
@ -100,6 +100,7 @@ config JME
|
|||
config KORINA
|
||||
tristate "Korina (IDT RC32434) Ethernet support"
|
||||
depends on MIKROTIK_RB532 || COMPILE_TEST
|
||||
select CRC32
|
||||
select MII
|
||||
help
|
||||
If you have a Mikrotik RouterBoard 500 or IDT RC32434
|
||||
|
|
|
@ -21,6 +21,7 @@ config ARC_EMAC_CORE
|
|||
depends on ARC || ARCH_ROCKCHIP || COMPILE_TEST
|
||||
select MII
|
||||
select PHYLIB
|
||||
select CRC32
|
||||
|
||||
config ARC_EMAC
|
||||
tristate "ARC EMAC support"
|
||||
|
|
|
@ -1313,22 +1313,21 @@ ice_ptp_flush_tx_tracker(struct ice_pf *pf, struct ice_ptp_tx *tx)
|
|||
{
|
||||
u8 idx;
|
||||
|
||||
spin_lock(&tx->lock);
|
||||
|
||||
for (idx = 0; idx < tx->len; idx++) {
|
||||
u8 phy_idx = idx + tx->quad_offset;
|
||||
|
||||
/* Clear any potential residual timestamp in the PHY block */
|
||||
if (!pf->hw.reset_ongoing)
|
||||
ice_clear_phy_tstamp(&pf->hw, tx->quad, phy_idx);
|
||||
|
||||
spin_lock(&tx->lock);
|
||||
if (tx->tstamps[idx].skb) {
|
||||
dev_kfree_skb_any(tx->tstamps[idx].skb);
|
||||
tx->tstamps[idx].skb = NULL;
|
||||
}
|
||||
}
|
||||
clear_bit(idx, tx->in_use);
|
||||
spin_unlock(&tx->lock);
|
||||
|
||||
spin_unlock(&tx->lock);
|
||||
/* Clear any potential residual timestamp in the PHY block */
|
||||
if (!pf->hw.reset_ongoing)
|
||||
ice_clear_phy_tstamp(&pf->hw, tx->quad, phy_idx);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
|
@ -155,6 +155,8 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
|
|||
u32 in[MLX5_ST_SZ_DW(destroy_cq_in)] = {};
|
||||
int err;
|
||||
|
||||
mlx5_debug_cq_remove(dev, cq);
|
||||
|
||||
mlx5_eq_del_cq(mlx5_get_async_eq(dev), cq);
|
||||
mlx5_eq_del_cq(&cq->eq->core, cq);
|
||||
|
||||
|
@ -162,16 +164,13 @@ int mlx5_core_destroy_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq)
|
|||
MLX5_SET(destroy_cq_in, in, cqn, cq->cqn);
|
||||
MLX5_SET(destroy_cq_in, in, uid, cq->uid);
|
||||
err = mlx5_cmd_exec_in(dev, destroy_cq, in);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
synchronize_irq(cq->irqn);
|
||||
|
||||
mlx5_debug_cq_remove(dev, cq);
|
||||
mlx5_cq_put(cq);
|
||||
wait_for_completion(&cq->free);
|
||||
|
||||
return 0;
|
||||
return err;
|
||||
}
|
||||
EXPORT_SYMBOL(mlx5_core_destroy_cq);
|
||||
|
||||
|
|
|
@ -475,9 +475,6 @@ void mlx5e_rep_bridge_init(struct mlx5e_priv *priv)
|
|||
esw_warn(mdev, "Failed to allocate bridge offloads workqueue\n");
|
||||
goto err_alloc_wq;
|
||||
}
|
||||
INIT_DELAYED_WORK(&br_offloads->update_work, mlx5_esw_bridge_update_work);
|
||||
queue_delayed_work(br_offloads->wq, &br_offloads->update_work,
|
||||
msecs_to_jiffies(MLX5_ESW_BRIDGE_UPDATE_INTERVAL));
|
||||
|
||||
br_offloads->nb.notifier_call = mlx5_esw_bridge_switchdev_event;
|
||||
err = register_switchdev_notifier(&br_offloads->nb);
|
||||
|
@ -500,6 +497,9 @@ void mlx5e_rep_bridge_init(struct mlx5e_priv *priv)
|
|||
err);
|
||||
goto err_register_netdev;
|
||||
}
|
||||
INIT_DELAYED_WORK(&br_offloads->update_work, mlx5_esw_bridge_update_work);
|
||||
queue_delayed_work(br_offloads->wq, &br_offloads->update_work,
|
||||
msecs_to_jiffies(MLX5_ESW_BRIDGE_UPDATE_INTERVAL));
|
||||
return;
|
||||
|
||||
err_register_netdev:
|
||||
|
@ -523,10 +523,10 @@ void mlx5e_rep_bridge_cleanup(struct mlx5e_priv *priv)
|
|||
if (!br_offloads)
|
||||
return;
|
||||
|
||||
cancel_delayed_work_sync(&br_offloads->update_work);
|
||||
unregister_netdevice_notifier(&br_offloads->netdev_nb);
|
||||
unregister_switchdev_blocking_notifier(&br_offloads->nb_blk);
|
||||
unregister_switchdev_notifier(&br_offloads->nb);
|
||||
cancel_delayed_work(&br_offloads->update_work);
|
||||
destroy_workqueue(br_offloads->wq);
|
||||
rtnl_lock();
|
||||
mlx5_esw_bridge_cleanup(esw);
|
||||
|
|
|
@ -2981,8 +2981,8 @@ static int mlx5e_mqprio_channel_validate(struct mlx5e_priv *priv,
|
|||
agg_count += mqprio->qopt.count[i];
|
||||
}
|
||||
|
||||
if (priv->channels.params.num_channels < agg_count) {
|
||||
netdev_err(netdev, "Num of queues (%d) exceeds available (%d)\n",
|
||||
if (priv->channels.params.num_channels != agg_count) {
|
||||
netdev_err(netdev, "Num of queues (%d) does not match available (%d)\n",
|
||||
agg_count, priv->channels.params.num_channels);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
@ -3325,20 +3325,67 @@ static int set_feature_rx_all(struct net_device *netdev, bool enable)
|
|||
return mlx5_set_port_fcs(mdev, !enable);
|
||||
}
|
||||
|
||||
static int mlx5e_set_rx_port_ts(struct mlx5_core_dev *mdev, bool enable)
|
||||
{
|
||||
u32 in[MLX5_ST_SZ_DW(pcmr_reg)] = {};
|
||||
bool supported, curr_state;
|
||||
int err;
|
||||
|
||||
if (!MLX5_CAP_GEN(mdev, ports_check))
|
||||
return 0;
|
||||
|
||||
err = mlx5_query_ports_check(mdev, in, sizeof(in));
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
supported = MLX5_GET(pcmr_reg, in, rx_ts_over_crc_cap);
|
||||
curr_state = MLX5_GET(pcmr_reg, in, rx_ts_over_crc);
|
||||
|
||||
if (!supported || enable == curr_state)
|
||||
return 0;
|
||||
|
||||
MLX5_SET(pcmr_reg, in, local_port, 1);
|
||||
MLX5_SET(pcmr_reg, in, rx_ts_over_crc, enable);
|
||||
|
||||
return mlx5_set_ports_check(mdev, in, sizeof(in));
|
||||
}
|
||||
|
||||
static int set_feature_rx_fcs(struct net_device *netdev, bool enable)
|
||||
{
|
||||
struct mlx5e_priv *priv = netdev_priv(netdev);
|
||||
struct mlx5e_channels *chs = &priv->channels;
|
||||
struct mlx5_core_dev *mdev = priv->mdev;
|
||||
int err;
|
||||
|
||||
mutex_lock(&priv->state_lock);
|
||||
|
||||
priv->channels.params.scatter_fcs_en = enable;
|
||||
err = mlx5e_modify_channels_scatter_fcs(&priv->channels, enable);
|
||||
if (err)
|
||||
priv->channels.params.scatter_fcs_en = !enable;
|
||||
if (enable) {
|
||||
err = mlx5e_set_rx_port_ts(mdev, false);
|
||||
if (err)
|
||||
goto out;
|
||||
|
||||
chs->params.scatter_fcs_en = true;
|
||||
err = mlx5e_modify_channels_scatter_fcs(chs, true);
|
||||
if (err) {
|
||||
chs->params.scatter_fcs_en = false;
|
||||
mlx5e_set_rx_port_ts(mdev, true);
|
||||
}
|
||||
} else {
|
||||
chs->params.scatter_fcs_en = false;
|
||||
err = mlx5e_modify_channels_scatter_fcs(chs, false);
|
||||
if (err) {
|
||||
chs->params.scatter_fcs_en = true;
|
||||
goto out;
|
||||
}
|
||||
err = mlx5e_set_rx_port_ts(mdev, true);
|
||||
if (err) {
|
||||
mlx5_core_warn(mdev, "Failed to set RX port timestamp %d\n", err);
|
||||
err = 0;
|
||||
}
|
||||
}
|
||||
|
||||
out:
|
||||
mutex_unlock(&priv->state_lock);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
|
|
|
@ -618,6 +618,11 @@ static void mlx5e_build_rep_params(struct net_device *netdev)
|
|||
params->mqprio.num_tc = 1;
|
||||
params->tunneled_offload_en = false;
|
||||
|
||||
/* Set an initial non-zero value, so that mlx5e_select_queue won't
|
||||
* divide by zero if called before first activating channels.
|
||||
*/
|
||||
priv->num_tc_x_num_ch = params->num_channels * params->mqprio.num_tc;
|
||||
|
||||
mlx5_query_min_inline(mdev, ¶ms->tx_min_inline_mode);
|
||||
}
|
||||
|
||||
|
@ -643,7 +648,6 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev,
|
|||
netdev->hw_features |= NETIF_F_RXCSUM;
|
||||
|
||||
netdev->features |= netdev->hw_features;
|
||||
netdev->features |= NETIF_F_VLAN_CHALLENGED;
|
||||
netdev->features |= NETIF_F_NETNS_LOCAL;
|
||||
}
|
||||
|
||||
|
|
|
@ -24,16 +24,8 @@
|
|||
#define MLXSW_THERMAL_ZONE_MAX_NAME 16
|
||||
#define MLXSW_THERMAL_TEMP_SCORE_MAX GENMASK(31, 0)
|
||||
#define MLXSW_THERMAL_MAX_STATE 10
|
||||
#define MLXSW_THERMAL_MIN_STATE 2
|
||||
#define MLXSW_THERMAL_MAX_DUTY 255
|
||||
/* Minimum and maximum fan allowed speed in percent: from 20% to 100%. Values
|
||||
* MLXSW_THERMAL_MAX_STATE + x, where x is between 2 and 10 are used for
|
||||
* setting fan speed dynamic minimum. For example, if value is set to 14 (40%)
|
||||
* cooling levels vector will be set to 4, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10 to
|
||||
* introduce PWM speed in percent: 40, 40, 40, 40, 40, 50, 60. 70, 80, 90, 100.
|
||||
*/
|
||||
#define MLXSW_THERMAL_SPEED_MIN (MLXSW_THERMAL_MAX_STATE + 2)
|
||||
#define MLXSW_THERMAL_SPEED_MAX (MLXSW_THERMAL_MAX_STATE * 2)
|
||||
#define MLXSW_THERMAL_SPEED_MIN_LEVEL 2 /* 20% */
|
||||
|
||||
/* External cooling devices, allowed for binding to mlxsw thermal zones. */
|
||||
static char * const mlxsw_thermal_external_allowed_cdev[] = {
|
||||
|
@ -646,49 +638,16 @@ static int mlxsw_thermal_set_cur_state(struct thermal_cooling_device *cdev,
|
|||
struct mlxsw_thermal *thermal = cdev->devdata;
|
||||
struct device *dev = thermal->bus_info->dev;
|
||||
char mfsc_pl[MLXSW_REG_MFSC_LEN];
|
||||
unsigned long cur_state, i;
|
||||
int idx;
|
||||
u8 duty;
|
||||
int err;
|
||||
|
||||
if (state > MLXSW_THERMAL_MAX_STATE)
|
||||
return -EINVAL;
|
||||
|
||||
idx = mlxsw_get_cooling_device_idx(thermal, cdev);
|
||||
if (idx < 0)
|
||||
return idx;
|
||||
|
||||
/* Verify if this request is for changing allowed fan dynamical
|
||||
* minimum. If it is - update cooling levels accordingly and update
|
||||
* state, if current state is below the newly requested minimum state.
|
||||
* For example, if current state is 5, and minimal state is to be
|
||||
* changed from 4 to 6, thermal->cooling_levels[0 to 5] will be changed
|
||||
* all from 4 to 6. And state 5 (thermal->cooling_levels[4]) should be
|
||||
* overwritten.
|
||||
*/
|
||||
if (state >= MLXSW_THERMAL_SPEED_MIN &&
|
||||
state <= MLXSW_THERMAL_SPEED_MAX) {
|
||||
state -= MLXSW_THERMAL_MAX_STATE;
|
||||
for (i = 0; i <= MLXSW_THERMAL_MAX_STATE; i++)
|
||||
thermal->cooling_levels[i] = max(state, i);
|
||||
|
||||
mlxsw_reg_mfsc_pack(mfsc_pl, idx, 0);
|
||||
err = mlxsw_reg_query(thermal->core, MLXSW_REG(mfsc), mfsc_pl);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
duty = mlxsw_reg_mfsc_pwm_duty_cycle_get(mfsc_pl);
|
||||
cur_state = mlxsw_duty_to_state(duty);
|
||||
|
||||
/* If current fan state is lower than requested dynamical
|
||||
* minimum, increase fan speed up to dynamical minimum.
|
||||
*/
|
||||
if (state < cur_state)
|
||||
return 0;
|
||||
|
||||
state = cur_state;
|
||||
}
|
||||
|
||||
if (state > MLXSW_THERMAL_MAX_STATE)
|
||||
return -EINVAL;
|
||||
|
||||
/* Normalize the state to the valid speed range. */
|
||||
state = thermal->cooling_levels[state];
|
||||
mlxsw_reg_mfsc_pack(mfsc_pl, idx, mlxsw_state_to_duty(state));
|
||||
|
@ -998,8 +957,7 @@ int mlxsw_thermal_init(struct mlxsw_core *core,
|
|||
|
||||
/* Initialize cooling levels per PWM state. */
|
||||
for (i = 0; i < MLXSW_THERMAL_MAX_STATE; i++)
|
||||
thermal->cooling_levels[i] = max(MLXSW_THERMAL_SPEED_MIN_LEVEL,
|
||||
i);
|
||||
thermal->cooling_levels[i] = max(MLXSW_THERMAL_MIN_STATE, i);
|
||||
|
||||
thermal->polling_delay = bus_info->low_frequency ?
|
||||
MLXSW_THERMAL_SLOW_POLL_INT :
|
||||
|
|
|
@ -497,13 +497,19 @@ static struct regmap_bus phymap_encx24j600 = {
|
|||
.reg_read = regmap_encx24j600_phy_reg_read,
|
||||
};
|
||||
|
||||
void devm_regmap_init_encx24j600(struct device *dev,
|
||||
struct encx24j600_context *ctx)
|
||||
int devm_regmap_init_encx24j600(struct device *dev,
|
||||
struct encx24j600_context *ctx)
|
||||
{
|
||||
mutex_init(&ctx->mutex);
|
||||
regcfg.lock_arg = ctx;
|
||||
ctx->regmap = devm_regmap_init(dev, ®map_encx24j600, ctx, ®cfg);
|
||||
if (IS_ERR(ctx->regmap))
|
||||
return PTR_ERR(ctx->regmap);
|
||||
ctx->phymap = devm_regmap_init(dev, &phymap_encx24j600, ctx, &phycfg);
|
||||
if (IS_ERR(ctx->phymap))
|
||||
return PTR_ERR(ctx->phymap);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(devm_regmap_init_encx24j600);
|
||||
|
||||
|
|
|
@ -1023,10 +1023,13 @@ static int encx24j600_spi_probe(struct spi_device *spi)
|
|||
priv->speed = SPEED_100;
|
||||
|
||||
priv->ctx.spi = spi;
|
||||
devm_regmap_init_encx24j600(&spi->dev, &priv->ctx);
|
||||
ndev->irq = spi->irq;
|
||||
ndev->netdev_ops = &encx24j600_netdev_ops;
|
||||
|
||||
ret = devm_regmap_init_encx24j600(&spi->dev, &priv->ctx);
|
||||
if (ret)
|
||||
goto out_free;
|
||||
|
||||
mutex_init(&priv->lock);
|
||||
|
||||
/* Reset device and check if it is connected */
|
||||
|
|
|
@ -15,8 +15,8 @@ struct encx24j600_context {
|
|||
int bank;
|
||||
};
|
||||
|
||||
void devm_regmap_init_encx24j600(struct device *dev,
|
||||
struct encx24j600_context *ctx);
|
||||
int devm_regmap_init_encx24j600(struct device *dev,
|
||||
struct encx24j600_context *ctx);
|
||||
|
||||
/* Single-byte instructions */
|
||||
#define BANK_SELECT(bank) (0xC0 | ((bank & (BANK_MASK >> BANK_SHIFT)) << 1))
|
||||
|
|
|
@ -1477,8 +1477,10 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
|
|||
if (err)
|
||||
goto out;
|
||||
|
||||
if (cq->gdma_id >= gc->max_num_cqs)
|
||||
if (WARN_ON(cq->gdma_id >= gc->max_num_cqs)) {
|
||||
err = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
gc->cq_table[cq->gdma_id] = cq->gdma_cq;
|
||||
|
||||
|
|
|
@ -472,9 +472,9 @@ void ocelot_phylink_mac_link_down(struct ocelot *ocelot, int port,
|
|||
!(quirks & OCELOT_QUIRK_QSGMII_PORTS_MUST_BE_UP))
|
||||
ocelot_port_rmwl(ocelot_port,
|
||||
DEV_CLOCK_CFG_MAC_TX_RST |
|
||||
DEV_CLOCK_CFG_MAC_TX_RST,
|
||||
DEV_CLOCK_CFG_MAC_RX_RST,
|
||||
DEV_CLOCK_CFG_MAC_TX_RST |
|
||||
DEV_CLOCK_CFG_MAC_TX_RST,
|
||||
DEV_CLOCK_CFG_MAC_RX_RST,
|
||||
DEV_CLOCK_CFG);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(ocelot_phylink_mac_link_down);
|
||||
|
@ -569,49 +569,44 @@ void ocelot_phylink_mac_link_up(struct ocelot *ocelot, int port,
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(ocelot_phylink_mac_link_up);
|
||||
|
||||
static void ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port,
|
||||
struct sk_buff *clone)
|
||||
static int ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port,
|
||||
struct sk_buff *clone)
|
||||
{
|
||||
struct ocelot_port *ocelot_port = ocelot->ports[port];
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock(&ocelot_port->ts_id_lock);
|
||||
spin_lock_irqsave(&ocelot->ts_id_lock, flags);
|
||||
|
||||
if (ocelot_port->ptp_skbs_in_flight == OCELOT_MAX_PTP_ID ||
|
||||
ocelot->ptp_skbs_in_flight == OCELOT_PTP_FIFO_SIZE) {
|
||||
spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
|
||||
/* Store timestamp ID in OCELOT_SKB_CB(clone)->ts_id */
|
||||
OCELOT_SKB_CB(clone)->ts_id = ocelot_port->ts_id;
|
||||
ocelot_port->ts_id = (ocelot_port->ts_id + 1) % 4;
|
||||
|
||||
ocelot_port->ts_id++;
|
||||
if (ocelot_port->ts_id == OCELOT_MAX_PTP_ID)
|
||||
ocelot_port->ts_id = 0;
|
||||
|
||||
ocelot_port->ptp_skbs_in_flight++;
|
||||
ocelot->ptp_skbs_in_flight++;
|
||||
|
||||
skb_queue_tail(&ocelot_port->tx_skbs, clone);
|
||||
|
||||
spin_unlock(&ocelot_port->ts_id_lock);
|
||||
spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
u32 ocelot_ptp_rew_op(struct sk_buff *skb)
|
||||
{
|
||||
struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone;
|
||||
u8 ptp_cmd = OCELOT_SKB_CB(skb)->ptp_cmd;
|
||||
u32 rew_op = 0;
|
||||
|
||||
if (ptp_cmd == IFH_REW_OP_TWO_STEP_PTP && clone) {
|
||||
rew_op = ptp_cmd;
|
||||
rew_op |= OCELOT_SKB_CB(clone)->ts_id << 3;
|
||||
} else if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) {
|
||||
rew_op = ptp_cmd;
|
||||
}
|
||||
|
||||
return rew_op;
|
||||
}
|
||||
EXPORT_SYMBOL(ocelot_ptp_rew_op);
|
||||
|
||||
static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb)
|
||||
static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb,
|
||||
unsigned int ptp_class)
|
||||
{
|
||||
struct ptp_header *hdr;
|
||||
unsigned int ptp_class;
|
||||
u8 msgtype, twostep;
|
||||
|
||||
ptp_class = ptp_classify_raw(skb);
|
||||
if (ptp_class == PTP_CLASS_NONE)
|
||||
return false;
|
||||
|
||||
hdr = ptp_parse_header(skb, ptp_class);
|
||||
if (!hdr)
|
||||
return false;
|
||||
|
@ -631,10 +626,20 @@ int ocelot_port_txtstamp_request(struct ocelot *ocelot, int port,
|
|||
{
|
||||
struct ocelot_port *ocelot_port = ocelot->ports[port];
|
||||
u8 ptp_cmd = ocelot_port->ptp_cmd;
|
||||
unsigned int ptp_class;
|
||||
int err;
|
||||
|
||||
/* Don't do anything if PTP timestamping not enabled */
|
||||
if (!ptp_cmd)
|
||||
return 0;
|
||||
|
||||
ptp_class = ptp_classify_raw(skb);
|
||||
if (ptp_class == PTP_CLASS_NONE)
|
||||
return -EINVAL;
|
||||
|
||||
/* Store ptp_cmd in OCELOT_SKB_CB(skb)->ptp_cmd */
|
||||
if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) {
|
||||
if (ocelot_ptp_is_onestep_sync(skb)) {
|
||||
if (ocelot_ptp_is_onestep_sync(skb, ptp_class)) {
|
||||
OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd;
|
||||
return 0;
|
||||
}
|
||||
|
@ -648,8 +653,12 @@ int ocelot_port_txtstamp_request(struct ocelot *ocelot, int port,
|
|||
if (!(*clone))
|
||||
return -ENOMEM;
|
||||
|
||||
ocelot_port_add_txtstamp_skb(ocelot, port, *clone);
|
||||
err = ocelot_port_add_txtstamp_skb(ocelot, port, *clone);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd;
|
||||
OCELOT_SKB_CB(*clone)->ptp_class = ptp_class;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
@ -683,6 +692,17 @@ static void ocelot_get_hwtimestamp(struct ocelot *ocelot,
|
|||
spin_unlock_irqrestore(&ocelot->ptp_clock_lock, flags);
|
||||
}
|
||||
|
||||
static bool ocelot_validate_ptp_skb(struct sk_buff *clone, u16 seqid)
|
||||
{
|
||||
struct ptp_header *hdr;
|
||||
|
||||
hdr = ptp_parse_header(clone, OCELOT_SKB_CB(clone)->ptp_class);
|
||||
if (WARN_ON(!hdr))
|
||||
return false;
|
||||
|
||||
return seqid == ntohs(hdr->sequence_id);
|
||||
}
|
||||
|
||||
void ocelot_get_txtstamp(struct ocelot *ocelot)
|
||||
{
|
||||
int budget = OCELOT_PTP_QUEUE_SZ;
|
||||
|
@ -690,10 +710,10 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
|
|||
while (budget--) {
|
||||
struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
|
||||
struct skb_shared_hwtstamps shhwtstamps;
|
||||
u32 val, id, seqid, txport;
|
||||
struct ocelot_port *port;
|
||||
struct timespec64 ts;
|
||||
unsigned long flags;
|
||||
u32 val, id, txport;
|
||||
|
||||
val = ocelot_read(ocelot, SYS_PTP_STATUS);
|
||||
|
||||
|
@ -706,10 +726,17 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
|
|||
/* Retrieve the ts ID and Tx port */
|
||||
id = SYS_PTP_STATUS_PTP_MESS_ID_X(val);
|
||||
txport = SYS_PTP_STATUS_PTP_MESS_TXPORT_X(val);
|
||||
seqid = SYS_PTP_STATUS_PTP_MESS_SEQ_ID(val);
|
||||
|
||||
/* Retrieve its associated skb */
|
||||
port = ocelot->ports[txport];
|
||||
|
||||
spin_lock(&ocelot->ts_id_lock);
|
||||
port->ptp_skbs_in_flight--;
|
||||
ocelot->ptp_skbs_in_flight--;
|
||||
spin_unlock(&ocelot->ts_id_lock);
|
||||
|
||||
/* Retrieve its associated skb */
|
||||
try_again:
|
||||
spin_lock_irqsave(&port->tx_skbs.lock, flags);
|
||||
|
||||
skb_queue_walk_safe(&port->tx_skbs, skb, skb_tmp) {
|
||||
|
@ -722,12 +749,20 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
|
|||
|
||||
spin_unlock_irqrestore(&port->tx_skbs.lock, flags);
|
||||
|
||||
if (WARN_ON(!skb_match))
|
||||
continue;
|
||||
|
||||
if (!ocelot_validate_ptp_skb(skb_match, seqid)) {
|
||||
dev_err_ratelimited(ocelot->dev,
|
||||
"port %d received stale TX timestamp for seqid %d, discarding\n",
|
||||
txport, seqid);
|
||||
dev_kfree_skb_any(skb);
|
||||
goto try_again;
|
||||
}
|
||||
|
||||
/* Get the h/w timestamp */
|
||||
ocelot_get_hwtimestamp(ocelot, &ts);
|
||||
|
||||
if (unlikely(!skb_match))
|
||||
continue;
|
||||
|
||||
/* Set the timestamp into the skb */
|
||||
memset(&shhwtstamps, 0, sizeof(shhwtstamps));
|
||||
shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
|
||||
|
@ -1948,7 +1983,6 @@ void ocelot_init_port(struct ocelot *ocelot, int port)
|
|||
struct ocelot_port *ocelot_port = ocelot->ports[port];
|
||||
|
||||
skb_queue_head_init(&ocelot_port->tx_skbs);
|
||||
spin_lock_init(&ocelot_port->ts_id_lock);
|
||||
|
||||
/* Basic L2 initialization */
|
||||
|
||||
|
@ -2081,6 +2115,7 @@ int ocelot_init(struct ocelot *ocelot)
|
|||
mutex_init(&ocelot->stats_lock);
|
||||
mutex_init(&ocelot->ptp_lock);
|
||||
spin_lock_init(&ocelot->ptp_clock_lock);
|
||||
spin_lock_init(&ocelot->ts_id_lock);
|
||||
snprintf(queue_name, sizeof(queue_name), "%s-stats",
|
||||
dev_name(ocelot->dev));
|
||||
ocelot->stats_queue = create_singlethread_workqueue(queue_name);
|
||||
|
|
|
@ -8,6 +8,7 @@
|
|||
* Copyright 2020-2021 NXP
|
||||
*/
|
||||
|
||||
#include <linux/dsa/ocelot.h>
|
||||
#include <linux/if_bridge.h>
|
||||
#include <linux/of_net.h>
|
||||
#include <linux/phy/phy.h>
|
||||
|
@ -1625,7 +1626,7 @@ static int ocelot_port_phylink_create(struct ocelot *ocelot, int port,
|
|||
if (phy_mode == PHY_INTERFACE_MODE_QSGMII)
|
||||
ocelot_port_rmwl(ocelot_port, 0,
|
||||
DEV_CLOCK_CFG_MAC_TX_RST |
|
||||
DEV_CLOCK_CFG_MAC_TX_RST,
|
||||
DEV_CLOCK_CFG_MAC_RX_RST,
|
||||
DEV_CLOCK_CFG);
|
||||
|
||||
ocelot_port->phy_mode = phy_mode;
|
||||
|
|
|
@ -8566,7 +8566,7 @@ static void s2io_io_resume(struct pci_dev *pdev)
|
|||
return;
|
||||
}
|
||||
|
||||
if (s2io_set_mac_addr(netdev, netdev->dev_addr) == FAILURE) {
|
||||
if (do_s2io_prog_unicast(netdev, netdev->dev_addr) == FAILURE) {
|
||||
s2io_card_down(sp);
|
||||
pr_err("Can't restore mac addr after reset.\n");
|
||||
return;
|
||||
|
|
|
@ -830,10 +830,6 @@ static int nfp_flower_init(struct nfp_app *app)
|
|||
if (err)
|
||||
goto err_cleanup;
|
||||
|
||||
err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
|
||||
if (err)
|
||||
goto err_cleanup;
|
||||
|
||||
if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM)
|
||||
nfp_flower_qos_init(app);
|
||||
|
||||
|
@ -942,7 +938,20 @@ static int nfp_flower_start(struct nfp_app *app)
|
|||
return err;
|
||||
}
|
||||
|
||||
return nfp_tunnel_config_start(app);
|
||||
err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
err = nfp_tunnel_config_start(app);
|
||||
if (err)
|
||||
goto err_tunnel_config;
|
||||
|
||||
return 0;
|
||||
|
||||
err_tunnel_config:
|
||||
flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app,
|
||||
nfp_flower_setup_indr_tc_release);
|
||||
return err;
|
||||
}
|
||||
|
||||
static void nfp_flower_stop(struct nfp_app *app)
|
||||
|
|
|
@ -1379,6 +1379,10 @@ static int ionic_addr_add(struct net_device *netdev, const u8 *addr)
|
|||
|
||||
static int ionic_addr_del(struct net_device *netdev, const u8 *addr)
|
||||
{
|
||||
/* Don't delete our own address from the uc list */
|
||||
if (ether_addr_equal(addr, netdev->dev_addr))
|
||||
return 0;
|
||||
|
||||
return ionic_lif_list_addr(netdev_priv(netdev), addr, DEL_ADDR);
|
||||
}
|
||||
|
||||
|
|
|
@ -1299,6 +1299,7 @@ static int qed_slowpath_start(struct qed_dev *cdev,
|
|||
} else {
|
||||
DP_NOTICE(cdev,
|
||||
"Failed to acquire PTT for aRFS\n");
|
||||
rc = -EINVAL;
|
||||
goto err;
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -71,6 +71,7 @@ err_remove_config_dt:
 
 static const struct of_device_id dwmac_generic_match[] = {
 	{ .compatible = "st,spear600-gmac"},
+	{ .compatible = "snps,dwmac-3.40a"},
 	{ .compatible = "snps,dwmac-3.50a"},
 	{ .compatible = "snps,dwmac-3.610"},
 	{ .compatible = "snps,dwmac-3.70a"},
@@ -218,11 +218,18 @@ static void dwmac1000_dump_dma_regs(void __iomem *ioaddr, u32 *reg_space)
 		readl(ioaddr + DMA_BUS_MODE + i * 4);
 }
 
-static void dwmac1000_get_hw_feature(void __iomem *ioaddr,
-				     struct dma_features *dma_cap)
+static int dwmac1000_get_hw_feature(void __iomem *ioaddr,
+				    struct dma_features *dma_cap)
 {
 	u32 hw_cap = readl(ioaddr + DMA_HW_FEATURE);
 
+	if (!hw_cap) {
+		/* 0x00000000 is the value read on old hardware that does not
+		 * implement this register
+		 */
+		return -EOPNOTSUPP;
+	}
+
 	dma_cap->mbps_10_100 = (hw_cap & DMA_HW_FEAT_MIISEL);
 	dma_cap->mbps_1000 = (hw_cap & DMA_HW_FEAT_GMIISEL) >> 1;
 	dma_cap->half_duplex = (hw_cap & DMA_HW_FEAT_HDSEL) >> 2;

@@ -252,6 +259,8 @@ static void dwmac1000_get_hw_feature(void __iomem *ioaddr,
 	dma_cap->number_tx_channel = (hw_cap & DMA_HW_FEAT_TXCHCNT) >> 22;
 	/* Alternate (enhanced) DESC mode */
 	dma_cap->enh_desc = (hw_cap & DMA_HW_FEAT_ENHDESSEL) >> 24;
+
+	return 0;
 }
 
 static void dwmac1000_rx_watchdog(void __iomem *ioaddr, u32 riwt,
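For readers outside the stmmac tree: get_hw_feature() now reports -EOPNOTSUPP instead of silently returning an all-zero capability set, so callers can keep their DT/platform defaults on old hardware. A minimal standalone sketch of that calling pattern, with a made-up fake_get_hw_feature() and a simplified dma_features struct (not the actual driver code paths):

#include <errno.h>
#include <stdio.h>

/* Hypothetical, simplified stand-ins for struct dma_features and the op. */
struct dma_features { int mbps_1000; int enh_desc; };

static int fake_get_hw_feature(unsigned int hw_cap, struct dma_features *dma_cap)
{
	if (!hw_cap)		/* old hardware: the register reads as 0 */
		return -EOPNOTSUPP;
	dma_cap->mbps_1000 = (hw_cap >> 1) & 1;
	dma_cap->enh_desc = (hw_cap >> 24) & 1;
	return 0;
}

int main(void)
{
	struct dma_features caps = { 0 };

	/* On -EOPNOTSUPP the caller keeps its platform-provided defaults
	 * instead of trusting an all-zero capability set.
	 */
	if (fake_get_hw_feature(0, &caps) == -EOPNOTSUPP)
		printf("HW feature register not implemented, keeping defaults\n");
	return 0;
}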
@ -347,8 +347,8 @@ static void dwmac4_dma_tx_chan_op_mode(void __iomem *ioaddr, int mode,
|
|||
writel(mtl_tx_op, ioaddr + MTL_CHAN_TX_OP_MODE(channel));
|
||||
}
|
||||
|
||||
static void dwmac4_get_hw_feature(void __iomem *ioaddr,
|
||||
struct dma_features *dma_cap)
|
||||
static int dwmac4_get_hw_feature(void __iomem *ioaddr,
|
||||
struct dma_features *dma_cap)
|
||||
{
|
||||
u32 hw_cap = readl(ioaddr + GMAC_HW_FEATURE0);
|
||||
|
||||
|
@ -437,6 +437,8 @@ static void dwmac4_get_hw_feature(void __iomem *ioaddr,
|
|||
dma_cap->frpbs = (hw_cap & GMAC_HW_FEAT_FRPBS) >> 11;
|
||||
dma_cap->frpsel = (hw_cap & GMAC_HW_FEAT_FRPSEL) >> 10;
|
||||
dma_cap->dvlan = (hw_cap & GMAC_HW_FEAT_DVLAN) >> 5;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Enable/disable TSO feature and set MSS */
|
||||
|
|
|
@ -371,8 +371,8 @@ static int dwxgmac2_dma_interrupt(void __iomem *ioaddr,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static void dwxgmac2_get_hw_feature(void __iomem *ioaddr,
|
||||
struct dma_features *dma_cap)
|
||||
static int dwxgmac2_get_hw_feature(void __iomem *ioaddr,
|
||||
struct dma_features *dma_cap)
|
||||
{
|
||||
u32 hw_cap;
|
||||
|
||||
|
@ -445,6 +445,8 @@ static void dwxgmac2_get_hw_feature(void __iomem *ioaddr,
|
|||
dma_cap->frpes = (hw_cap & XGMAC_HWFEAT_FRPES) >> 11;
|
||||
dma_cap->frpbs = (hw_cap & XGMAC_HWFEAT_FRPPB) >> 9;
|
||||
dma_cap->frpsel = (hw_cap & XGMAC_HWFEAT_FRPSEL) >> 3;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void dwxgmac2_rx_watchdog(void __iomem *ioaddr, u32 riwt, u32 queue)
|
||||
|
|
|
@@ -203,8 +203,8 @@ struct stmmac_dma_ops {
 	int (*dma_interrupt) (void __iomem *ioaddr,
 			      struct stmmac_extra_stats *x, u32 chan, u32 dir);
 	/* If supported then get the optional core features */
-	void (*get_hw_feature)(void __iomem *ioaddr,
-			       struct dma_features *dma_cap);
+	int (*get_hw_feature)(void __iomem *ioaddr,
+			      struct dma_features *dma_cap);
 	/* Program the HW RX Watchdog */
 	void (*rx_watchdog)(void __iomem *ioaddr, u32 riwt, u32 queue);
 	void (*set_tx_ring_len)(void __iomem *ioaddr, u32 len, u32 chan);

@@ -255,7 +255,7 @@ struct stmmac_dma_ops {
 #define stmmac_dma_interrupt_status(__priv, __args...) \
 	stmmac_do_callback(__priv, dma, dma_interrupt, __args)
 #define stmmac_get_hw_feature(__priv, __args...) \
-	stmmac_do_void_callback(__priv, dma, get_hw_feature, __args)
+	stmmac_do_callback(__priv, dma, get_hw_feature, __args)
 #define stmmac_rx_watchdog(__priv, __args...) \
 	stmmac_do_void_callback(__priv, dma, rx_watchdog, __args)
 #define stmmac_set_tx_ring_len(__priv, __args...) \
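The macro switch matters because the void-style wrapper discards the op's result while the value-returning wrapper propagates it to the caller. The exact kernel macros are not reproduced here; the following standalone C program is only a rough analogue of the two calling styles, with made-up names:

#include <errno.h>
#include <stdio.h>

typedef int (*get_hw_feature_fn)(unsigned int hw_cap);

static int probe_features(unsigned int hw_cap)
{
	return hw_cap ? 0 : -EOPNOTSUPP;
}

/* Rough analogue of a do_void_callback(): the op's result is thrown away. */
static int call_void_style(get_hw_feature_fn fn, unsigned int hw_cap)
{
	if (!fn)
		return -EINVAL;
	fn(hw_cap);
	return 0;
}

/* Rough analogue of a do_callback(): the op's result is propagated. */
static int call_value_style(get_hw_feature_fn fn, unsigned int hw_cap)
{
	return fn ? fn(hw_cap) : -EINVAL;
}

int main(void)
{
	printf("void-style:  %d\n", call_void_style(probe_features, 0));  /* 0, error lost */
	printf("value-style: %d\n", call_value_style(probe_features, 0)); /* -EOPNOTSUPP */
	return 0;
}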
@@ -508,6 +508,14 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
 		plat->pmt = 1;
 	}
 
+	if (of_device_is_compatible(np, "snps,dwmac-3.40a")) {
+		plat->has_gmac = 1;
+		plat->enh_desc = 1;
+		plat->tx_coe = 1;
+		plat->bugged_jumbo = 1;
+		plat->pmt = 1;
+	}
+
 	if (of_device_is_compatible(np, "snps,dwmac-4.00") ||
 	    of_device_is_compatible(np, "snps,dwmac-4.10a") ||
 	    of_device_is_compatible(np, "snps,dwmac-4.20a") ||
@@ -3125,6 +3125,9 @@ static void phy_shutdown(struct device *dev)
 {
 	struct phy_device *phydev = to_phy_device(dev);
 
+	if (phydev->state == PHY_READY || !phydev->attached_dev)
+		return;
+
 	phy_disable_interrupts(phydev);
 }
@ -99,6 +99,10 @@ config USB_RTL8150
|
|||
config USB_RTL8152
|
||||
tristate "Realtek RTL8152/RTL8153 Based USB Ethernet Adapters"
|
||||
select MII
|
||||
select CRC32
|
||||
select CRYPTO
|
||||
select CRYPTO_HASH
|
||||
select CRYPTO_SHA256
|
||||
help
|
||||
This option adds support for Realtek RTL8152 based USB 2.0
|
||||
10/100 Ethernet adapters and RTL8153 based USB 3.0 10/100/1000
|
||||
|
|
|
@@ -406,7 +406,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 	 * add_recvbuf_mergeable() + get_mergeable_buf_len()
 	 */
 	truesize = headroom ? PAGE_SIZE : truesize;
-	tailroom = truesize - len - headroom;
+	tailroom = truesize - len - headroom - (hdr_padded_len - hdr_len);
 	buf = p - headroom;
 
 	len -= hdr_len;
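Rough numbers (purely illustrative, not taken from the driver) show why subtracting the header padding matters in big mode: the buffer was laid out with a padded virtio header, so ignoring hdr_padded_len - hdr_len overstates the available tailroom and can later trip skb_over_panic when the skb is extended.

#include <stdio.h>

int main(void)
{
	/* Assumed example values, for illustration only. */
	unsigned int truesize = 4096;		/* PAGE_SIZE, headroom path      */
	unsigned int len = 3800;		/* received frame length         */
	unsigned int headroom = 256;
	unsigned int hdr_len = 12;		/* virtio-net header             */
	unsigned int hdr_padded_len = 16;	/* header as padded in the buffer */

	unsigned int old_tailroom = truesize - len - headroom;
	unsigned int new_tailroom = truesize - len - headroom -
				    (hdr_padded_len - hdr_len);

	/* The old formula claims 4 bytes of tailroom that do not exist. */
	printf("old=%u new=%u\n", old_tailroom, new_tailroom);
	return 0;
}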
include/linux/dsa/mv88e6xxx.h (new file, 13 lines)
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0
+ * Copyright 2021 NXP
+ */
+
+#ifndef _NET_DSA_TAG_MV88E6XXX_H
+#define _NET_DSA_TAG_MV88E6XXX_H
+
+#include <linux/if_vlan.h>
+
+#define MV88E6XXX_VID_STANDALONE	0
+#define MV88E6XXX_VID_BRIDGED		(VLAN_N_VID - 1)
+
+#endif
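The two reserved VIDs give the mv88e6xxx tagger a fixed station-to-VLAN mapping even when the bridge is VLAN-unaware: standalone ports inject with VID 0 and VLAN-unaware bridged ports with the last VID, so removing a bridge pvid no longer drops CPU-injected packets. A minimal standalone sketch of the selection (tx_vid() is a made-up helper name; compare with the net/dsa/tag_dsa.c hunk further down):

#include <stdbool.h>
#include <stdio.h>

#define VLAN_N_VID			4096
#define MV88E6XXX_VID_STANDALONE	0
#define MV88E6XXX_VID_BRIDGED		(VLAN_N_VID - 1)

/* Sketch: which VID goes into the DSA header for a FROM_CPU frame,
 * depending on whether the transmitting port is under a bridge.
 */
static unsigned short tx_vid(bool port_is_bridged)
{
	return port_is_bridged ? MV88E6XXX_VID_BRIDGED : MV88E6XXX_VID_STANDALONE;
}

int main(void)
{
	printf("standalone port -> VID %u\n", tx_vid(false));	/* 0    */
	printf("bridged port    -> VID %u\n", tx_vid(true));	/* 4095 */
	return 0;
}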
@ -5,7 +5,28 @@
|
|||
#ifndef _NET_DSA_TAG_OCELOT_H
|
||||
#define _NET_DSA_TAG_OCELOT_H
|
||||
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/packing.h>
|
||||
#include <linux/skbuff.h>
|
||||
|
||||
struct ocelot_skb_cb {
|
||||
struct sk_buff *clone;
|
||||
unsigned int ptp_class; /* valid only for clones */
|
||||
u8 ptp_cmd;
|
||||
u8 ts_id;
|
||||
};
|
||||
|
||||
#define OCELOT_SKB_CB(skb) \
|
||||
((struct ocelot_skb_cb *)((skb)->cb))
|
||||
|
||||
#define IFH_TAG_TYPE_C 0
|
||||
#define IFH_TAG_TYPE_S 1
|
||||
|
||||
#define IFH_REW_OP_NOOP 0x0
|
||||
#define IFH_REW_OP_DSCP 0x1
|
||||
#define IFH_REW_OP_ONE_STEP_PTP 0x2
|
||||
#define IFH_REW_OP_TWO_STEP_PTP 0x3
|
||||
#define IFH_REW_OP_ORIGIN_PTP 0x5
|
||||
|
||||
#define OCELOT_TAG_LEN 16
|
||||
#define OCELOT_SHORT_PREFIX_LEN 4
|
||||
|
@ -140,6 +161,17 @@
|
|||
* +------+------+------+------+------+------+------+------+
|
||||
*/
|
||||
|
||||
struct felix_deferred_xmit_work {
|
||||
struct dsa_port *dp;
|
||||
struct sk_buff *skb;
|
||||
struct kthread_work work;
|
||||
};
|
||||
|
||||
struct felix_port {
|
||||
void (*xmit_work_fn)(struct kthread_work *work);
|
||||
struct kthread_worker *xmit_worker;
|
||||
};
|
||||
|
||||
static inline void ocelot_xfh_get_rew_val(void *extraction, u64 *rew_val)
|
||||
{
|
||||
packing(extraction, rew_val, 116, 85, OCELOT_TAG_LEN, UNPACK, 0);
|
||||
|
@@ -215,4 +247,21 @@ static inline void ocelot_ifh_set_vid(void *injection, u64 vid)
 	packing(injection, &vid, 11, 0, OCELOT_TAG_LEN, PACK, 0);
 }
 
+/* Determine the PTP REW_OP to use for injecting the given skb */
+static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb)
+{
+	struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone;
+	u8 ptp_cmd = OCELOT_SKB_CB(skb)->ptp_cmd;
+	u32 rew_op = 0;
+
+	if (ptp_cmd == IFH_REW_OP_TWO_STEP_PTP && clone) {
+		rew_op = ptp_cmd;
+		rew_op |= OCELOT_SKB_CB(clone)->ts_id << 3;
+	} else if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) {
+		rew_op = ptp_cmd;
+	}
+
+	return rew_op;
+}
+
 #endif
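For context, this helper is what lets a tag driver decide whether a frame needs the special PTP injection path without linking against the ocelot switch library. Outline only (not a compilable unit): example_xmit() is a made-up name, while ocelot_ptp_rew_op() and ocelot_defer_xmit() come from hunks elsewhere in this pull; error handling is trimmed.

/* Sketch of a tagger TX path, roughly following net/dsa/tag_ocelot_8021q.c. */
static struct sk_buff *example_xmit(struct sk_buff *skb, struct dsa_port *dp)
{
	u32 rew_op = ocelot_ptp_rew_op(skb);

	if (rew_op)				/* one-step or two-step PTP frame */
		return ocelot_defer_xmit(dp, skb);

	return skb;				/* ordinary VLAN injection path */
}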
@ -48,6 +48,10 @@ struct sja1105_tagger_data {
|
|||
spinlock_t meta_lock;
|
||||
unsigned long state;
|
||||
u8 ts_id;
|
||||
/* Used on SJA1110 where meta frames are generated only for
|
||||
* 2-step TX timestamps
|
||||
*/
|
||||
struct sk_buff_head skb_txtstamp_queue;
|
||||
};
|
||||
|
||||
struct sja1105_skb_cb {
|
||||
|
@ -69,42 +73,24 @@ struct sja1105_port {
|
|||
bool hwts_tx_en;
|
||||
};
|
||||
|
||||
enum sja1110_meta_tstamp {
|
||||
SJA1110_META_TSTAMP_TX = 0,
|
||||
SJA1110_META_TSTAMP_RX = 1,
|
||||
};
|
||||
/* Timestamps are in units of 8 ns clock ticks (equivalent to
|
||||
* a fixed 125 MHz clock).
|
||||
*/
|
||||
#define SJA1105_TICK_NS 8
|
||||
|
||||
#if IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP)
|
||||
|
||||
void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port, u8 ts_id,
|
||||
enum sja1110_meta_tstamp dir, u64 tstamp);
|
||||
|
||||
#else
|
||||
|
||||
static inline void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port,
|
||||
u8 ts_id, enum sja1110_meta_tstamp dir,
|
||||
u64 tstamp)
|
||||
static inline s64 ns_to_sja1105_ticks(s64 ns)
|
||||
{
|
||||
return ns / SJA1105_TICK_NS;
|
||||
}
|
||||
|
||||
#endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP) */
|
||||
|
||||
#if IS_ENABLED(CONFIG_NET_DSA_SJA1105)
|
||||
|
||||
extern const struct dsa_switch_ops sja1105_switch_ops;
|
||||
static inline s64 sja1105_ticks_to_ns(s64 ticks)
|
||||
{
|
||||
return ticks * SJA1105_TICK_NS;
|
||||
}
|
||||
|
||||
static inline bool dsa_port_is_sja1105(struct dsa_port *dp)
|
||||
{
|
||||
return dp->ds->ops == &sja1105_switch_ops;
|
||||
return true;
|
||||
}
|
||||
|
||||
#else
|
||||
|
||||
static inline bool dsa_port_is_sja1105(struct dsa_port *dp)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#endif /* _NET_DSA_SJA1105_H */
|
||||
|
|
|
@ -9475,16 +9475,22 @@ struct mlx5_ifc_pcmr_reg_bits {
|
|||
u8 reserved_at_0[0x8];
|
||||
u8 local_port[0x8];
|
||||
u8 reserved_at_10[0x10];
|
||||
|
||||
u8 entropy_force_cap[0x1];
|
||||
u8 entropy_calc_cap[0x1];
|
||||
u8 entropy_gre_calc_cap[0x1];
|
||||
u8 reserved_at_23[0x1b];
|
||||
u8 reserved_at_23[0xf];
|
||||
u8 rx_ts_over_crc_cap[0x1];
|
||||
u8 reserved_at_33[0xb];
|
||||
u8 fcs_cap[0x1];
|
||||
u8 reserved_at_3f[0x1];
|
||||
|
||||
u8 entropy_force[0x1];
|
||||
u8 entropy_calc[0x1];
|
||||
u8 entropy_gre_calc[0x1];
|
||||
u8 reserved_at_43[0x1b];
|
||||
u8 reserved_at_43[0xf];
|
||||
u8 rx_ts_over_crc[0x1];
|
||||
u8 reserved_at_53[0xb];
|
||||
u8 fcs_chk[0x1];
|
||||
u8 reserved_at_5f[0x1];
|
||||
};
|
||||
|
|
|
@ -89,15 +89,6 @@
|
|||
/* Source PGIDs, one per physical port */
|
||||
#define PGID_SRC 80
|
||||
|
||||
#define IFH_TAG_TYPE_C 0
|
||||
#define IFH_TAG_TYPE_S 1
|
||||
|
||||
#define IFH_REW_OP_NOOP 0x0
|
||||
#define IFH_REW_OP_DSCP 0x1
|
||||
#define IFH_REW_OP_ONE_STEP_PTP 0x2
|
||||
#define IFH_REW_OP_TWO_STEP_PTP 0x3
|
||||
#define IFH_REW_OP_ORIGIN_PTP 0x5
|
||||
|
||||
#define OCELOT_NUM_TC 8
|
||||
|
||||
#define OCELOT_SPEED_2500 0
|
||||
|
@ -603,10 +594,10 @@ struct ocelot_port {
|
|||
/* The VLAN ID that will be transmitted as untagged, on egress */
|
||||
struct ocelot_vlan native_vlan;
|
||||
|
||||
unsigned int ptp_skbs_in_flight;
|
||||
u8 ptp_cmd;
|
||||
struct sk_buff_head tx_skbs;
|
||||
u8 ts_id;
|
||||
spinlock_t ts_id_lock;
|
||||
|
||||
phy_interface_t phy_mode;
|
||||
|
||||
|
@ -680,6 +671,9 @@ struct ocelot {
|
|||
struct ptp_clock *ptp_clock;
|
||||
struct ptp_clock_info ptp_info;
|
||||
struct hwtstamp_config hwtstamp_config;
|
||||
unsigned int ptp_skbs_in_flight;
|
||||
/* Protects the 2-step TX timestamp ID logic */
|
||||
spinlock_t ts_id_lock;
|
||||
/* Protects the PTP interface state */
|
||||
struct mutex ptp_lock;
|
||||
/* Protects the PTP clock */
|
||||
|
@ -692,15 +686,6 @@ struct ocelot_policer {
|
|||
u32 burst; /* bytes */
|
||||
};
|
||||
|
||||
struct ocelot_skb_cb {
|
||||
struct sk_buff *clone;
|
||||
u8 ptp_cmd;
|
||||
u8 ts_id;
|
||||
};
|
||||
|
||||
#define OCELOT_SKB_CB(skb) \
|
||||
((struct ocelot_skb_cb *)((skb)->cb))
|
||||
|
||||
#define ocelot_read_ix(ocelot, reg, gi, ri) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi) + reg##_RSZ * (ri))
|
||||
#define ocelot_read_gix(ocelot, reg, gi) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi))
|
||||
#define ocelot_read_rix(ocelot, reg, ri) __ocelot_read_ix(ocelot, reg, reg##_RSZ * (ri))
|
||||
|
@ -752,8 +737,6 @@ u32 __ocelot_target_read_ix(struct ocelot *ocelot, enum ocelot_target target,
|
|||
void __ocelot_target_write_ix(struct ocelot *ocelot, enum ocelot_target target,
|
||||
u32 val, u32 reg, u32 offset);
|
||||
|
||||
#if IS_ENABLED(CONFIG_MSCC_OCELOT_SWITCH_LIB)
|
||||
|
||||
/* Packet I/O */
|
||||
bool ocelot_can_inject(struct ocelot *ocelot, int grp);
|
||||
void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp,
|
||||
|
@ -761,36 +744,6 @@ void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp,
|
|||
int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **skb);
|
||||
void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp);
|
||||
|
||||
u32 ocelot_ptp_rew_op(struct sk_buff *skb);
|
||||
#else
|
||||
|
||||
static inline bool ocelot_can_inject(struct ocelot *ocelot, int grp)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
|
||||
static inline void ocelot_port_inject_frame(struct ocelot *ocelot, int port,
|
||||
int grp, u32 rew_op,
|
||||
struct sk_buff *skb)
|
||||
{
|
||||
}
|
||||
|
||||
static inline int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp,
|
||||
struct sk_buff **skb)
|
||||
{
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
static inline void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp)
|
||||
{
|
||||
}
|
||||
|
||||
static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
#endif
|
||||
|
||||
/* Hardware initialization */
|
||||
int ocelot_regfields_init(struct ocelot *ocelot,
|
||||
const struct reg_field *const regfields);
|
||||
|
|
|
@ -13,6 +13,9 @@
|
|||
#include <linux/ptp_clock_kernel.h>
|
||||
#include <soc/mscc/ocelot.h>
|
||||
|
||||
#define OCELOT_MAX_PTP_ID 63
|
||||
#define OCELOT_PTP_FIFO_SIZE 128
|
||||
|
||||
#define PTP_PIN_CFG_RSZ 0x20
|
||||
#define PTP_PIN_TOD_SEC_MSB_RSZ PTP_PIN_CFG_RSZ
|
||||
#define PTP_PIN_TOD_SEC_LSB_RSZ PTP_PIN_CFG_RSZ
|
||||
|
|
|
@ -77,8 +77,8 @@ static void dev_seq_printf_stats(struct seq_file *seq, struct net_device *dev)
|
|||
struct rtnl_link_stats64 temp;
|
||||
const struct rtnl_link_stats64 *stats = dev_get_stats(dev, &temp);
|
||||
|
||||
seq_printf(seq, "%9s: %16llu %12llu %4llu %6llu %4llu %5llu %10llu %9llu "
|
||||
"%16llu %12llu %4llu %6llu %4llu %5llu %7llu %10llu\n",
|
||||
seq_printf(seq, "%6s: %7llu %7llu %4llu %4llu %4llu %5llu %10llu %9llu "
|
||||
"%8llu %7llu %4llu %4llu %4llu %5llu %7llu %10llu\n",
|
||||
dev->name, stats->rx_bytes, stats->rx_packets,
|
||||
stats->rx_errors,
|
||||
stats->rx_dropped + stats->rx_missed_errors,
|
||||
|
@ -103,11 +103,11 @@ static void dev_seq_printf_stats(struct seq_file *seq, struct net_device *dev)
|
|||
static int dev_seq_show(struct seq_file *seq, void *v)
|
||||
{
|
||||
if (v == SEQ_START_TOKEN)
|
||||
seq_puts(seq, "Interface| Receive "
|
||||
" | Transmit\n"
|
||||
" | bytes packets errs drop fifo frame "
|
||||
"compressed multicast| bytes packets errs "
|
||||
" drop fifo colls carrier compressed\n");
|
||||
seq_puts(seq, "Inter-| Receive "
|
||||
" | Transmit\n"
|
||||
" face |bytes packets errs drop fifo frame "
|
||||
"compressed multicast|bytes packets errs "
|
||||
"drop fifo colls carrier compressed\n");
|
||||
else
|
||||
dev_seq_printf_stats(seq, v);
|
||||
return 0;
|
||||
|
@ -259,14 +259,14 @@ static int ptype_seq_show(struct seq_file *seq, void *v)
|
|||
struct packet_type *pt = v;
|
||||
|
||||
if (v == SEQ_START_TOKEN)
|
||||
seq_puts(seq, "Type Device Function\n");
|
||||
seq_puts(seq, "Type Device Function\n");
|
||||
else if (pt->dev == NULL || dev_net(pt->dev) == seq_file_net(seq)) {
|
||||
if (pt->type == htons(ETH_P_ALL))
|
||||
seq_puts(seq, "ALL ");
|
||||
else
|
||||
seq_printf(seq, "%04x", ntohs(pt->type));
|
||||
|
||||
seq_printf(seq, " %-9s %ps\n",
|
||||
seq_printf(seq, " %-8s %ps\n",
|
||||
pt->dev ? pt->dev->name : "", pt->func);
|
||||
}
|
||||
|
||||
|
@ -327,14 +327,12 @@ static int dev_mc_seq_show(struct seq_file *seq, void *v)
|
|||
struct netdev_hw_addr *ha;
|
||||
struct net_device *dev = v;
|
||||
|
||||
if (v == SEQ_START_TOKEN) {
|
||||
seq_puts(seq, "Ifindex Interface Refcount Global_use Address\n");
|
||||
if (v == SEQ_START_TOKEN)
|
||||
return 0;
|
||||
}
|
||||
|
||||
netif_addr_lock_bh(dev);
|
||||
netdev_for_each_mc_addr(ha, dev) {
|
||||
seq_printf(seq, "%-7d %-9s %-8d %-10d %*phN\n",
|
||||
seq_printf(seq, "%-4d %-15s %-5d %-5d %*phN\n",
|
||||
dev->ifindex, dev->name,
|
||||
ha->refcount, ha->global_use,
|
||||
(int)dev->addr_len, ha->addr);
|
||||
|
|
|
@ -101,8 +101,6 @@ config NET_DSA_TAG_RTL4_A
|
|||
|
||||
config NET_DSA_TAG_OCELOT
|
||||
tristate "Tag driver for Ocelot family of switches, using NPI port"
|
||||
depends on MSCC_OCELOT_SWITCH_LIB || \
|
||||
(MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST)
|
||||
select PACKING
|
||||
help
|
||||
Say Y or M if you want to enable NPI tagging for the Ocelot switches
|
||||
|
@ -114,8 +112,6 @@ config NET_DSA_TAG_OCELOT
|
|||
|
||||
config NET_DSA_TAG_OCELOT_8021Q
|
||||
tristate "Tag driver for Ocelot family of switches, using VLAN"
|
||||
depends on MSCC_OCELOT_SWITCH_LIB || \
|
||||
(MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST)
|
||||
help
|
||||
Say Y or M if you want to enable support for tagging frames with a
|
||||
custom VLAN-based header. Frames that require timestamping, such as
|
||||
|
@ -138,7 +134,6 @@ config NET_DSA_TAG_LAN9303
|
|||
|
||||
config NET_DSA_TAG_SJA1105
|
||||
tristate "Tag driver for NXP SJA1105 switches"
|
||||
depends on NET_DSA_SJA1105 || !NET_DSA_SJA1105
|
||||
select PACKING
|
||||
help
|
||||
Say Y or M if you want to enable support for tagging frames with the
|
||||
|
|
|
@@ -170,7 +170,7 @@ void dsa_bridge_num_put(const struct net_device *bridge_dev, int bridge_num)
 	/* Check if the bridge is still in use, otherwise it is time
 	 * to clean it up so we can reuse this bridge_num later.
 	 */
-	if (!dsa_bridge_num_find(bridge_dev))
+	if (dsa_bridge_num_find(bridge_dev) < 0)
 		clear_bit(bridge_num, &dsa_fwd_offloading_bridges);
 }
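The fix addresses a type confusion: the lookup appears to return a bridge number (>= 0) while the bridge is still in use, and a negative value once it is gone, so the old "!ret" test only ever freed bridge number 0 and every other bridge_num leaked. A standalone toy model of the two checks, with a hypothetical find_bridge_num() standing in for the real lookup:

#include <errno.h>
#include <stdio.h>

/* Toy stand-in: returns the bridge number still in use, or -ENOENT. */
static int find_bridge_num(int still_in_use, int bridge_num)
{
	return still_in_use ? bridge_num : -ENOENT;
}

int main(void)
{
	int ret = find_bridge_num(0, 3);	/* bridge 3 just lost its last port */

	/* Old check: frees the number only when it happens to be 0. */
	printf("old check frees: %s\n", !ret ? "yes" : "no");
	/* New check: frees whenever the lookup reports "not found". */
	printf("new check frees: %s\n", ret < 0 ? "yes" : "no");
	return 0;
}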
@ -811,7 +811,9 @@ static int dsa_switch_setup_tag_protocol(struct dsa_switch *ds)
|
|||
if (!dsa_is_cpu_port(ds, port))
|
||||
continue;
|
||||
|
||||
rtnl_lock();
|
||||
err = ds->ops->change_tag_protocol(ds, port, tag_ops->proto);
|
||||
rtnl_unlock();
|
||||
if (err) {
|
||||
dev_err(ds->dev, "Unable to use tag protocol \"%s\": %pe\n",
|
||||
tag_ops->name, ERR_PTR(err));
|
||||
|
|
|
@@ -168,7 +168,7 @@ static int dsa_switch_bridge_leave(struct dsa_switch *ds,
 		if (extack._msg)
 			dev_err(ds->dev, "port %d: %s\n", info->port,
 				extack._msg);
-		if (err && err != EOPNOTSUPP)
+		if (err && err != -EOPNOTSUPP)
 			return err;
 	}
@ -45,6 +45,7 @@
|
|||
* 6 6 2 2 4 2 N
|
||||
*/
|
||||
|
||||
#include <linux/dsa/mv88e6xxx.h>
|
||||
#include <linux/etherdevice.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/slab.h>
|
||||
|
@ -129,12 +130,9 @@ static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev,
|
|||
u8 tag_dev, tag_port;
|
||||
enum dsa_cmd cmd;
|
||||
u8 *dsa_header;
|
||||
u16 pvid = 0;
|
||||
int err;
|
||||
|
||||
if (skb->offload_fwd_mark) {
|
||||
struct dsa_switch_tree *dst = dp->ds->dst;
|
||||
struct net_device *br = dp->bridge_dev;
|
||||
|
||||
cmd = DSA_CMD_FORWARD;
|
||||
|
||||
|
@ -144,19 +142,6 @@ static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev,
|
|||
*/
|
||||
tag_dev = dst->last_switch + 1 + dp->bridge_num;
|
||||
tag_port = 0;
|
||||
|
||||
/* If we are offloading forwarding for a VLAN-unaware bridge,
|
||||
* inject packets to hardware using the bridge's pvid, since
|
||||
* that's where the packets ingressed from.
|
||||
*/
|
||||
if (!br_vlan_enabled(br)) {
|
||||
/* Safe because __dev_queue_xmit() runs under
|
||||
* rcu_read_lock_bh()
|
||||
*/
|
||||
err = br_vlan_get_pvid_rcu(br, &pvid);
|
||||
if (err)
|
||||
return NULL;
|
||||
}
|
||||
} else {
|
||||
cmd = DSA_CMD_FROM_CPU;
|
||||
tag_dev = dp->ds->index;
|
||||
|
@ -180,16 +165,21 @@ static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev,
|
|||
dsa_header[2] &= ~0x10;
|
||||
}
|
||||
} else {
|
||||
struct net_device *br = dp->bridge_dev;
|
||||
u16 vid;
|
||||
|
||||
vid = br ? MV88E6XXX_VID_BRIDGED : MV88E6XXX_VID_STANDALONE;
|
||||
|
||||
skb_push(skb, DSA_HLEN + extra);
|
||||
dsa_alloc_etype_header(skb, DSA_HLEN + extra);
|
||||
|
||||
/* Construct untagged DSA tag. */
|
||||
/* Construct DSA header from untagged frame. */
|
||||
dsa_header = dsa_etype_header_pos_tx(skb) + extra;
|
||||
|
||||
dsa_header[0] = (cmd << 6) | tag_dev;
|
||||
dsa_header[1] = tag_port << 3;
|
||||
dsa_header[2] = pvid >> 8;
|
||||
dsa_header[3] = pvid & 0xff;
|
||||
dsa_header[2] = vid >> 8;
|
||||
dsa_header[3] = vid & 0xff;
|
||||
}
|
||||
|
||||
return skb;
|
||||
|
|
|
@ -2,7 +2,6 @@
|
|||
/* Copyright 2019 NXP
|
||||
*/
|
||||
#include <linux/dsa/ocelot.h>
|
||||
#include <soc/mscc/ocelot.h>
|
||||
#include "dsa_priv.h"
|
||||
|
||||
static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
|
||||
|
|
|
@ -9,10 +9,32 @@
|
|||
* that on egress
|
||||
*/
|
||||
#include <linux/dsa/8021q.h>
|
||||
#include <soc/mscc/ocelot.h>
|
||||
#include <soc/mscc/ocelot_ptp.h>
|
||||
#include <linux/dsa/ocelot.h>
|
||||
#include "dsa_priv.h"
|
||||
|
||||
static struct sk_buff *ocelot_defer_xmit(struct dsa_port *dp,
|
||||
struct sk_buff *skb)
|
||||
{
|
||||
struct felix_deferred_xmit_work *xmit_work;
|
||||
struct felix_port *felix_port = dp->priv;
|
||||
|
||||
xmit_work = kzalloc(sizeof(*xmit_work), GFP_ATOMIC);
|
||||
if (!xmit_work)
|
||||
return NULL;
|
||||
|
||||
/* Calls felix_port_deferred_xmit in felix.c */
|
||||
kthread_init_work(&xmit_work->work, felix_port->xmit_work_fn);
|
||||
/* Increase refcount so the kfree_skb in dsa_slave_xmit
|
||||
* won't really free the packet.
|
||||
*/
|
||||
xmit_work->dp = dp;
|
||||
xmit_work->skb = skb_get(skb);
|
||||
|
||||
kthread_queue_work(felix_port->xmit_worker, &xmit_work->work);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
|
||||
struct net_device *netdev)
|
||||
{
|
||||
|
@ -20,18 +42,10 @@ static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
|
|||
u16 tx_vid = dsa_8021q_tx_vid(dp->ds, dp->index);
|
||||
u16 queue_mapping = skb_get_queue_mapping(skb);
|
||||
u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
|
||||
struct ocelot *ocelot = dp->ds->priv;
|
||||
int port = dp->index;
|
||||
u32 rew_op = 0;
|
||||
struct ethhdr *hdr = eth_hdr(skb);
|
||||
|
||||
rew_op = ocelot_ptp_rew_op(skb);
|
||||
if (rew_op) {
|
||||
if (!ocelot_can_inject(ocelot, 0))
|
||||
return NULL;
|
||||
|
||||
ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb);
|
||||
return NULL;
|
||||
}
|
||||
if (ocelot_ptp_rew_op(skb) || is_link_local_ether_addr(hdr->h_dest))
|
||||
return ocelot_defer_xmit(dp, skb);
|
||||
|
||||
return dsa_8021q_xmit(skb, netdev, ETH_P_8021Q,
|
||||
((pcp << VLAN_PRIO_SHIFT) | tx_vid));
|
||||
|
|
|
@ -4,6 +4,7 @@
|
|||
#include <linux/if_vlan.h>
|
||||
#include <linux/dsa/sja1105.h>
|
||||
#include <linux/dsa/8021q.h>
|
||||
#include <linux/skbuff.h>
|
||||
#include <linux/packing.h>
|
||||
#include "dsa_priv.h"
|
||||
|
||||
|
@ -53,6 +54,11 @@
|
|||
#define SJA1110_TX_TRAILER_LEN 4
|
||||
#define SJA1110_MAX_PADDING_LEN 15
|
||||
|
||||
enum sja1110_meta_tstamp {
|
||||
SJA1110_META_TSTAMP_TX = 0,
|
||||
SJA1110_META_TSTAMP_RX = 1,
|
||||
};
|
||||
|
||||
/* Similar to is_link_local_ether_addr(hdr->h_dest) but also covers PTP */
|
||||
static inline bool sja1105_is_link_local(const struct sk_buff *skb)
|
||||
{
|
||||
|
@ -520,6 +526,43 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
|
|||
is_meta);
|
||||
}
|
||||
|
||||
static void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port,
|
||||
u8 ts_id, enum sja1110_meta_tstamp dir,
|
||||
u64 tstamp)
|
||||
{
|
||||
struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
|
||||
struct dsa_port *dp = dsa_to_port(ds, port);
|
||||
struct skb_shared_hwtstamps shwt = {0};
|
||||
struct sja1105_port *sp = dp->priv;
|
||||
|
||||
if (!dsa_port_is_sja1105(dp))
|
||||
return;
|
||||
|
||||
/* We don't care about RX timestamps on the CPU port */
|
||||
if (dir == SJA1110_META_TSTAMP_RX)
|
||||
return;
|
||||
|
||||
spin_lock(&sp->data->skb_txtstamp_queue.lock);
|
||||
|
||||
skb_queue_walk_safe(&sp->data->skb_txtstamp_queue, skb, skb_tmp) {
|
||||
if (SJA1105_SKB_CB(skb)->ts_id != ts_id)
|
||||
continue;
|
||||
|
||||
__skb_unlink(skb, &sp->data->skb_txtstamp_queue);
|
||||
skb_match = skb;
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock(&sp->data->skb_txtstamp_queue.lock);
|
||||
|
||||
if (WARN_ON(!skb_match))
|
||||
return;
|
||||
|
||||
shwt.hwtstamp = ns_to_ktime(sja1105_ticks_to_ns(tstamp));
|
||||
skb_complete_tx_timestamp(skb_match, &shwt);
|
||||
}
|
||||
|
||||
static struct sk_buff *sja1110_rcv_meta(struct sk_buff *skb, u16 rx_header)
|
||||
{
|
||||
u8 *buf = dsa_etype_header_pos_rx(skb) + SJA1110_HEADER_LEN;
|
||||
|
|
|
@@ -1054,14 +1054,19 @@ bool icmp_build_probe(struct sk_buff *skb, struct icmphdr *icmphdr)
 	iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr), &_iio);
 	if (!ext_hdr || !iio)
 		goto send_mal_query;
-	if (ntohs(iio->extobj_hdr.length) <= sizeof(iio->extobj_hdr))
+	if (ntohs(iio->extobj_hdr.length) <= sizeof(iio->extobj_hdr) ||
+	    ntohs(iio->extobj_hdr.length) > sizeof(_iio))
 		goto send_mal_query;
+	ident_len = ntohs(iio->extobj_hdr.length) - sizeof(iio->extobj_hdr);
+	iio = skb_header_pointer(skb, sizeof(_ext_hdr),
+				 sizeof(iio->extobj_hdr) + ident_len, &_iio);
+	if (!iio)
+		goto send_mal_query;
 
 	status = 0;
 	dev = NULL;
 	switch (iio->extobj_hdr.class_type) {
 	case ICMP_EXT_ECHO_CTYPE_NAME:
-		iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(_iio), &_iio);
 		if (ident_len >= IFNAMSIZ)
 			goto send_mal_query;
 		memset(buff, 0, sizeof(buff));
@ -1069,30 +1074,24 @@ bool icmp_build_probe(struct sk_buff *skb, struct icmphdr *icmphdr)
|
|||
dev = dev_get_by_name(net, buff);
|
||||
break;
|
||||
case ICMP_EXT_ECHO_CTYPE_INDEX:
|
||||
iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr) +
|
||||
sizeof(iio->ident.ifindex), &_iio);
|
||||
if (ident_len != sizeof(iio->ident.ifindex))
|
||||
goto send_mal_query;
|
||||
dev = dev_get_by_index(net, ntohl(iio->ident.ifindex));
|
||||
break;
|
||||
case ICMP_EXT_ECHO_CTYPE_ADDR:
|
||||
if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
|
||||
if (ident_len < sizeof(iio->ident.addr.ctype3_hdr) ||
|
||||
ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
|
||||
iio->ident.addr.ctype3_hdr.addrlen)
|
||||
goto send_mal_query;
|
||||
switch (ntohs(iio->ident.addr.ctype3_hdr.afi)) {
|
||||
case ICMP_AFI_IP:
|
||||
iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr) +
|
||||
sizeof(struct in_addr), &_iio);
|
||||
if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
|
||||
sizeof(struct in_addr))
|
||||
if (iio->ident.addr.ctype3_hdr.addrlen != sizeof(struct in_addr))
|
||||
goto send_mal_query;
|
||||
dev = ip_dev_find(net, iio->ident.addr.ip_addr.ipv4_addr);
|
||||
break;
|
||||
#if IS_ENABLED(CONFIG_IPV6)
|
||||
case ICMP_AFI_IP6:
|
||||
iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(_iio), &_iio);
|
||||
if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
|
||||
sizeof(struct in6_addr))
|
||||
if (iio->ident.addr.ctype3_hdr.addrlen != sizeof(struct in6_addr))
|
||||
goto send_mal_query;
|
||||
dev = ipv6_stub->ipv6_dev_find(net, &iio->ident.addr.ip_addr.ipv6_addr, dev);
|
||||
dev_hold(dev);
|
||||
|
|
|
@ -770,6 +770,66 @@ static void __ioam6_fill_trace_data(struct sk_buff *skb,
|
|||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit12 undefined: filled with empty value */
|
||||
if (trace->type.bit12) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit13 undefined: filled with empty value */
|
||||
if (trace->type.bit13) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit14 undefined: filled with empty value */
|
||||
if (trace->type.bit14) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit15 undefined: filled with empty value */
|
||||
if (trace->type.bit15) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit16 undefined: filled with empty value */
|
||||
if (trace->type.bit16) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit17 undefined: filled with empty value */
|
||||
if (trace->type.bit17) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit18 undefined: filled with empty value */
|
||||
if (trace->type.bit18) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit19 undefined: filled with empty value */
|
||||
if (trace->type.bit19) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit20 undefined: filled with empty value */
|
||||
if (trace->type.bit20) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* bit21 undefined: filled with empty value */
|
||||
if (trace->type.bit21) {
|
||||
*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
|
||||
data += sizeof(__be32);
|
||||
}
|
||||
|
||||
/* opaque state snapshot */
|
||||
if (trace->type.bit22) {
|
||||
if (!sc) {
|
||||
|
@@ -791,16 +851,10 @@ void ioam6_fill_trace_data(struct sk_buff *skb,
 	struct ioam6_schema *sc;
 	u8 sclen = 0;
 
-	/* Skip if Overflow flag is set OR
-	 * if an unknown type (bit 12-21) is set
+	/* Skip if Overflow flag is set
 	 */
-	if (trace->overflow ||
-	    trace->type.bit12 | trace->type.bit13 | trace->type.bit14 |
-	    trace->type.bit15 | trace->type.bit16 | trace->type.bit17 |
-	    trace->type.bit18 | trace->type.bit19 | trace->type.bit20 |
-	    trace->type.bit21) {
+	if (trace->overflow)
 		return;
-	}
 
 	/* NodeLen does not include Opaque State Snapshot length. We need to
 	 * take it into account if the corresponding bit is set (bit 22) and
|
@ -75,7 +75,11 @@ static bool ioam6_validate_trace_hdr(struct ioam6_trace_hdr *trace)
|
|||
u32 fields;
|
||||
|
||||
if (!trace->type_be32 || !trace->remlen ||
|
||||
trace->remlen > IOAM6_TRACE_DATA_SIZE_MAX / 4)
|
||||
trace->remlen > IOAM6_TRACE_DATA_SIZE_MAX / 4 ||
|
||||
trace->type.bit12 | trace->type.bit13 | trace->type.bit14 |
|
||||
trace->type.bit15 | trace->type.bit16 | trace->type.bit17 |
|
||||
trace->type.bit18 | trace->type.bit19 | trace->type.bit20 |
|
||||
trace->type.bit21)
|
||||
return false;
|
||||
|
||||
trace->nodelen = 0;
|
||||
|
|
|
@ -528,7 +528,6 @@ static bool mptcp_check_data_fin(struct sock *sk)
|
|||
|
||||
sk->sk_shutdown |= RCV_SHUTDOWN;
|
||||
smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
|
||||
set_bit(MPTCP_DATA_READY, &msk->flags);
|
||||
|
||||
switch (sk->sk_state) {
|
||||
case TCP_ESTABLISHED:
|
||||
|
@ -742,10 +741,9 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
|
|||
|
||||
/* Wake-up the reader only for in-sequence data */
|
||||
mptcp_data_lock(sk);
|
||||
if (move_skbs_to_msk(msk, ssk)) {
|
||||
set_bit(MPTCP_DATA_READY, &msk->flags);
|
||||
if (move_skbs_to_msk(msk, ssk))
|
||||
sk->sk_data_ready(sk);
|
||||
}
|
||||
|
||||
mptcp_data_unlock(sk);
|
||||
}
|
||||
|
||||
|
@ -847,7 +845,6 @@ static void mptcp_check_for_eof(struct mptcp_sock *msk)
|
|||
sk->sk_shutdown |= RCV_SHUTDOWN;
|
||||
|
||||
smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
|
||||
set_bit(MPTCP_DATA_READY, &msk->flags);
|
||||
sk->sk_data_ready(sk);
|
||||
}
|
||||
|
||||
|
@ -1759,21 +1756,6 @@ out:
|
|||
return copied ? : ret;
|
||||
}
|
||||
|
||||
static void mptcp_wait_data(struct sock *sk, long *timeo)
|
||||
{
|
||||
DEFINE_WAIT_FUNC(wait, woken_wake_function);
|
||||
struct mptcp_sock *msk = mptcp_sk(sk);
|
||||
|
||||
add_wait_queue(sk_sleep(sk), &wait);
|
||||
sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
|
||||
|
||||
sk_wait_event(sk, timeo,
|
||||
test_bit(MPTCP_DATA_READY, &msk->flags), &wait);
|
||||
|
||||
sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
|
||||
remove_wait_queue(sk_sleep(sk), &wait);
|
||||
}
|
||||
|
||||
static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
|
||||
struct msghdr *msg,
|
||||
size_t len, int flags,
|
||||
|
@ -2077,19 +2059,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
|
|||
}
|
||||
|
||||
pr_debug("block timeout %ld", timeo);
|
||||
mptcp_wait_data(sk, &timeo);
|
||||
}
|
||||
|
||||
if (skb_queue_empty_lockless(&sk->sk_receive_queue) &&
|
||||
skb_queue_empty(&msk->receive_queue)) {
|
||||
/* entire backlog drained, clear DATA_READY. */
|
||||
clear_bit(MPTCP_DATA_READY, &msk->flags);
|
||||
|
||||
/* .. race-breaker: ssk might have gotten new data
|
||||
* after last __mptcp_move_skbs() returned false.
|
||||
*/
|
||||
if (unlikely(__mptcp_move_skbs(msk)))
|
||||
set_bit(MPTCP_DATA_READY, &msk->flags);
|
||||
sk_wait_data(sk, &timeo, NULL);
|
||||
}
|
||||
|
||||
out_err:
|
||||
|
@ -2098,9 +2068,9 @@ out_err:
|
|||
tcp_recv_timestamp(msg, sk, &tss);
|
||||
}
|
||||
|
||||
pr_debug("msk=%p data_ready=%d rx queue empty=%d copied=%d",
|
||||
msk, test_bit(MPTCP_DATA_READY, &msk->flags),
|
||||
skb_queue_empty_lockless(&sk->sk_receive_queue), copied);
|
||||
pr_debug("msk=%p rx queue empty=%d:%d copied=%d",
|
||||
msk, skb_queue_empty_lockless(&sk->sk_receive_queue),
|
||||
skb_queue_empty(&msk->receive_queue), copied);
|
||||
if (!(flags & MSG_PEEK))
|
||||
mptcp_rcv_space_adjust(msk, copied);
|
||||
|
||||
|
@ -2368,7 +2338,6 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
|
|||
inet_sk_state_store(sk, TCP_CLOSE);
|
||||
sk->sk_shutdown = SHUTDOWN_MASK;
|
||||
smp_mb__before_atomic(); /* SHUTDOWN must be visible first */
|
||||
set_bit(MPTCP_DATA_READY, &msk->flags);
|
||||
set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags);
|
||||
|
||||
mptcp_close_wake_up(sk);
|
||||
|
@@ -3385,8 +3354,14 @@ unlock_fail:
 
 static __poll_t mptcp_check_readable(struct mptcp_sock *msk)
 {
-	return test_bit(MPTCP_DATA_READY, &msk->flags) ? EPOLLIN | EPOLLRDNORM :
-	       0;
+	/* Concurrent splices from sk_receive_queue into receive_queue will
+	 * always show at least one non-empty queue when checked in this order.
+	 */
+	if (skb_queue_empty_lockless(&((struct sock *)msk)->sk_receive_queue) &&
+	    skb_queue_empty_lockless(&msk->receive_queue))
+		return 0;
+
+	return EPOLLIN | EPOLLRDNORM;
 }
 
 static __poll_t mptcp_check_writeable(struct mptcp_sock *msk)
@ -3421,7 +3396,7 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
|
|||
state = inet_sk_state_load(sk);
|
||||
pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags);
|
||||
if (state == TCP_LISTEN)
|
||||
return mptcp_check_readable(msk);
|
||||
return test_bit(MPTCP_DATA_READY, &msk->flags) ? EPOLLIN | EPOLLRDNORM : 0;
|
||||
|
||||
if (state != TCP_SYN_SENT && state != TCP_SYN_RECV) {
|
||||
mask |= mptcp_check_readable(msk);
|
||||
|
|
|
@ -60,6 +60,9 @@ int nfc_proto_register(const struct nfc_protocol *nfc_proto)
|
|||
proto_tab[nfc_proto->id] = nfc_proto;
|
||||
write_unlock(&proto_tab_lock);
|
||||
|
||||
if (rc)
|
||||
proto_unregister(nfc_proto->proto);
|
||||
|
||||
return rc;
|
||||
}
|
||||
EXPORT_SYMBOL(nfc_proto_register);
|
||||
|
|
|
@ -277,6 +277,7 @@ int digital_tg_configure_hw(struct nfc_digital_dev *ddev, int type, int param)
|
|||
static int digital_tg_listen_mdaa(struct nfc_digital_dev *ddev, u8 rf_tech)
|
||||
{
|
||||
struct digital_tg_mdaa_params *params;
|
||||
int rc;
|
||||
|
||||
params = kzalloc(sizeof(*params), GFP_KERNEL);
|
||||
if (!params)
|
||||
|
@ -291,8 +292,12 @@ static int digital_tg_listen_mdaa(struct nfc_digital_dev *ddev, u8 rf_tech)
|
|||
get_random_bytes(params->nfcid2 + 2, NFC_NFCID2_MAXSIZE - 2);
|
||||
params->sc = DIGITAL_SENSF_FELICA_SC;
|
||||
|
||||
return digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params,
|
||||
500, digital_tg_recv_atr_req, NULL);
|
||||
rc = digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params,
|
||||
500, digital_tg_recv_atr_req, NULL);
|
||||
if (rc)
|
||||
kfree(params);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int digital_tg_listen_md(struct nfc_digital_dev *ddev, u8 rf_tech)
|
||||
|
|
|
@ -465,8 +465,12 @@ static int digital_in_send_sdd_req(struct nfc_digital_dev *ddev,
|
|||
skb_put_u8(skb, sel_cmd);
|
||||
skb_put_u8(skb, DIGITAL_SDD_REQ_SEL_PAR);
|
||||
|
||||
return digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res,
|
||||
target);
|
||||
rc = digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res,
|
||||
target);
|
||||
if (rc)
|
||||
kfree_skb(skb);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
static void digital_in_recv_sens_res(struct nfc_digital_dev *ddev, void *arg,
|
||||
|
|
|
@ -334,6 +334,8 @@ static void nci_core_conn_close_rsp_packet(struct nci_dev *ndev,
|
|||
ndev->cur_conn_id);
|
||||
if (conn_info) {
|
||||
list_del(&conn_info->list);
|
||||
if (conn_info == ndev->rf_conn_info)
|
||||
ndev->rf_conn_info = NULL;
|
||||
devm_kfree(&ndev->nfc_dev->dev, conn_info);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -529,22 +529,28 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
|
|||
for (i = tc.offset; i < tc.offset + tc.count; i++) {
|
||||
struct netdev_queue *q = netdev_get_tx_queue(dev, i);
|
||||
struct Qdisc *qdisc = rtnl_dereference(q->qdisc);
|
||||
struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL;
|
||||
struct gnet_stats_queue __percpu *cpu_qstats = NULL;
|
||||
|
||||
spin_lock_bh(qdisc_lock(qdisc));
|
||||
if (qdisc_is_percpu_stats(qdisc)) {
|
||||
cpu_bstats = qdisc->cpu_bstats;
|
||||
cpu_qstats = qdisc->cpu_qstats;
|
||||
}
|
||||
|
||||
qlen = qdisc_qlen_sum(qdisc);
|
||||
__gnet_stats_copy_basic(NULL, &sch->bstats,
|
||||
cpu_bstats, &qdisc->bstats);
|
||||
__gnet_stats_copy_queue(&sch->qstats,
|
||||
cpu_qstats,
|
||||
&qdisc->qstats,
|
||||
qlen);
|
||||
if (qdisc_is_percpu_stats(qdisc)) {
|
||||
qlen = qdisc_qlen_sum(qdisc);
|
||||
|
||||
__gnet_stats_copy_basic(NULL, &bstats,
|
||||
qdisc->cpu_bstats,
|
||||
&qdisc->bstats);
|
||||
__gnet_stats_copy_queue(&qstats,
|
||||
qdisc->cpu_qstats,
|
||||
&qdisc->qstats,
|
||||
qlen);
|
||||
} else {
|
||||
qlen += qdisc->q.qlen;
|
||||
bstats.bytes += qdisc->bstats.bytes;
|
||||
bstats.packets += qdisc->bstats.packets;
|
||||
qstats.backlog += qdisc->qstats.backlog;
|
||||
qstats.drops += qdisc->qstats.drops;
|
||||
qstats.requeues += qdisc->qstats.requeues;
|
||||
qstats.overlimits += qdisc->qstats.overlimits;
|
||||
}
|
||||
spin_unlock_bh(qdisc_lock(qdisc));
|
||||
}
|
||||
|
||||
|
|
|
@@ -3697,7 +3697,7 @@ struct sctp_chunk *sctp_make_strreset_req(
 	outlen = (sizeof(outreq) + stream_len) * out;
 	inlen = (sizeof(inreq) + stream_len) * in;
 
-	retval = sctp_make_reconf(asoc, outlen + inlen);
+	retval = sctp_make_reconf(asoc, SCTP_PAD4(outlen) + SCTP_PAD4(inlen));
 	if (!retval)
 		return NULL;
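The reconf chunk carries the outgoing and incoming stream-reset parameters back to back, and each parameter is padded to a 4-byte boundary on the wire, so sizing the chunk with the raw lengths can come up short. Rough arithmetic with assumed lengths; the PAD4 macro below is a local illustration of 4-byte rounding, not copied from the kernel:

#include <stdio.h>

#define PAD4(s)	(((s) + 3) & ~3u)	/* round up to a multiple of 4 */

int main(void)
{
	/* Assumed parameter lengths, for the example only. */
	unsigned int outlen = 22, inlen = 18;

	printf("unpadded: %u bytes\n", outlen + inlen);			/* 40 */
	printf("padded:   %u bytes\n", PAD4(outlen) + PAD4(inlen));	/* 44 */
	return 0;
}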
@ -150,9 +150,11 @@ static int smcr_cdc_get_slot_and_msg_send(struct smc_connection *conn)
|
|||
|
||||
again:
|
||||
link = conn->lnk;
|
||||
if (!smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_cdc_get_free_slot(conn, link, &wr_buf, NULL, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
|
||||
spin_lock_bh(&conn->send_lock);
|
||||
if (link != conn->lnk) {
|
||||
|
@ -160,6 +162,7 @@ again:
|
|||
spin_unlock_bh(&conn->send_lock);
|
||||
smc_wr_tx_put_slot(link,
|
||||
(struct smc_wr_tx_pend_priv *)pend);
|
||||
smc_wr_tx_link_put(link);
|
||||
if (again)
|
||||
return -ENOLINK;
|
||||
again = true;
|
||||
|
@ -167,6 +170,8 @@ again:
|
|||
}
|
||||
rc = smc_cdc_msg_send(conn, wr_buf, pend);
|
||||
spin_unlock_bh(&conn->send_lock);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
|
|
@ -949,7 +949,7 @@ struct smc_link *smc_switch_conns(struct smc_link_group *lgr,
|
|||
to_lnk = &lgr->lnk[i];
|
||||
break;
|
||||
}
|
||||
if (!to_lnk) {
|
||||
if (!to_lnk || !smc_wr_tx_link_hold(to_lnk)) {
|
||||
smc_lgr_terminate_sched(lgr);
|
||||
return NULL;
|
||||
}
|
||||
|
@ -981,24 +981,26 @@ again:
|
|||
read_unlock_bh(&lgr->conns_lock);
|
||||
/* pre-fetch buffer outside of send_lock, might sleep */
|
||||
rc = smc_cdc_get_free_slot(conn, to_lnk, &wr_buf, NULL, &pend);
|
||||
if (rc) {
|
||||
smcr_link_down_cond_sched(to_lnk);
|
||||
return NULL;
|
||||
}
|
||||
if (rc)
|
||||
goto err_out;
|
||||
/* avoid race with smcr_tx_sndbuf_nonempty() */
|
||||
spin_lock_bh(&conn->send_lock);
|
||||
smc_switch_link_and_count(conn, to_lnk);
|
||||
rc = smc_switch_cursor(smc, pend, wr_buf);
|
||||
spin_unlock_bh(&conn->send_lock);
|
||||
sock_put(&smc->sk);
|
||||
if (rc) {
|
||||
smcr_link_down_cond_sched(to_lnk);
|
||||
return NULL;
|
||||
}
|
||||
if (rc)
|
||||
goto err_out;
|
||||
goto again;
|
||||
}
|
||||
read_unlock_bh(&lgr->conns_lock);
|
||||
smc_wr_tx_link_put(to_lnk);
|
||||
return to_lnk;
|
||||
|
||||
err_out:
|
||||
smcr_link_down_cond_sched(to_lnk);
|
||||
smc_wr_tx_link_put(to_lnk);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void smcr_buf_unuse(struct smc_buf_desc *rmb_desc,
|
||||
|
|
|
@ -383,9 +383,11 @@ int smc_llc_send_confirm_link(struct smc_link *link,
|
|||
struct smc_wr_buf *wr_buf;
|
||||
int rc;
|
||||
|
||||
if (!smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
confllc = (struct smc_llc_msg_confirm_link *)wr_buf;
|
||||
memset(confllc, 0, sizeof(*confllc));
|
||||
confllc->hd.common.type = SMC_LLC_CONFIRM_LINK;
|
||||
|
@ -402,6 +404,8 @@ int smc_llc_send_confirm_link(struct smc_link *link,
|
|||
confllc->max_links = SMC_LLC_ADD_LNK_MAX_LINKS;
|
||||
/* send llc message */
|
||||
rc = smc_wr_tx_send(link, pend);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
@ -415,9 +419,11 @@ static int smc_llc_send_confirm_rkey(struct smc_link *send_link,
|
|||
struct smc_link *link;
|
||||
int i, rc, rtok_ix;
|
||||
|
||||
if (!smc_wr_tx_link_hold(send_link))
|
||||
return -ENOLINK;
|
||||
rc = smc_llc_add_pending_send(send_link, &wr_buf, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
rkeyllc = (struct smc_llc_msg_confirm_rkey *)wr_buf;
|
||||
memset(rkeyllc, 0, sizeof(*rkeyllc));
|
||||
rkeyllc->hd.common.type = SMC_LLC_CONFIRM_RKEY;
|
||||
|
@ -444,6 +450,8 @@ static int smc_llc_send_confirm_rkey(struct smc_link *send_link,
|
|||
(u64)sg_dma_address(rmb_desc->sgt[send_link->link_idx].sgl));
|
||||
/* send llc message */
|
||||
rc = smc_wr_tx_send(send_link, pend);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(send_link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
@ -456,9 +464,11 @@ static int smc_llc_send_delete_rkey(struct smc_link *link,
|
|||
struct smc_wr_buf *wr_buf;
|
||||
int rc;
|
||||
|
||||
if (!smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
rkeyllc = (struct smc_llc_msg_delete_rkey *)wr_buf;
|
||||
memset(rkeyllc, 0, sizeof(*rkeyllc));
|
||||
rkeyllc->hd.common.type = SMC_LLC_DELETE_RKEY;
|
||||
|
@ -467,6 +477,8 @@ static int smc_llc_send_delete_rkey(struct smc_link *link,
|
|||
rkeyllc->rkey[0] = htonl(rmb_desc->mr_rx[link->link_idx]->rkey);
|
||||
/* send llc message */
|
||||
rc = smc_wr_tx_send(link, pend);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
@ -480,9 +492,11 @@ int smc_llc_send_add_link(struct smc_link *link, u8 mac[], u8 gid[],
|
|||
struct smc_wr_buf *wr_buf;
|
||||
int rc;
|
||||
|
||||
if (!smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
addllc = (struct smc_llc_msg_add_link *)wr_buf;
|
||||
|
||||
memset(addllc, 0, sizeof(*addllc));
|
||||
|
@ -504,6 +518,8 @@ int smc_llc_send_add_link(struct smc_link *link, u8 mac[], u8 gid[],
|
|||
}
|
||||
/* send llc message */
|
||||
rc = smc_wr_tx_send(link, pend);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
@ -517,9 +533,11 @@ int smc_llc_send_delete_link(struct smc_link *link, u8 link_del_id,
|
|||
struct smc_wr_buf *wr_buf;
|
||||
int rc;
|
||||
|
||||
if (!smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
delllc = (struct smc_llc_msg_del_link *)wr_buf;
|
||||
|
||||
memset(delllc, 0, sizeof(*delllc));
|
||||
|
@ -536,6 +554,8 @@ int smc_llc_send_delete_link(struct smc_link *link, u8 link_del_id,
|
|||
delllc->reason = htonl(reason);
|
||||
/* send llc message */
|
||||
rc = smc_wr_tx_send(link, pend);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
@ -547,9 +567,11 @@ static int smc_llc_send_test_link(struct smc_link *link, u8 user_data[16])
|
|||
struct smc_wr_buf *wr_buf;
|
||||
int rc;
|
||||
|
||||
if (!smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
testllc = (struct smc_llc_msg_test_link *)wr_buf;
|
||||
memset(testllc, 0, sizeof(*testllc));
|
||||
testllc->hd.common.type = SMC_LLC_TEST_LINK;
|
||||
|
@ -557,6 +579,8 @@ static int smc_llc_send_test_link(struct smc_link *link, u8 user_data[16])
|
|||
memcpy(testllc->user_data, user_data, sizeof(testllc->user_data));
|
||||
/* send llc message */
|
||||
rc = smc_wr_tx_send(link, pend);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
@ -567,13 +591,16 @@ static int smc_llc_send_message(struct smc_link *link, void *llcbuf)
|
|||
struct smc_wr_buf *wr_buf;
|
||||
int rc;
|
||||
|
||||
if (!smc_link_usable(link))
|
||||
if (!smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg));
|
||||
return smc_wr_tx_send(link, pend);
|
||||
rc = smc_wr_tx_send(link, pend);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
/* schedule an llc send on link, may wait for buffers,
|
||||
|
@ -586,13 +613,16 @@ static int smc_llc_send_message_wait(struct smc_link *link, void *llcbuf)
|
|||
struct smc_wr_buf *wr_buf;
|
||||
int rc;
|
||||
|
||||
if (!smc_link_usable(link))
|
||||
if (!smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg));
|
||||
return smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME);
|
||||
rc = smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
/********************************* receive ***********************************/
|
||||
|
@ -672,9 +702,11 @@ static int smc_llc_add_link_cont(struct smc_link *link,
|
|||
struct smc_buf_desc *rmb;
|
||||
u8 n;
|
||||
|
||||
if (!smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
|
||||
if (rc)
|
||||
return rc;
|
||||
goto put_out;
|
||||
addc_llc = (struct smc_llc_msg_add_link_cont *)wr_buf;
|
||||
memset(addc_llc, 0, sizeof(*addc_llc));
|
||||
|
||||
|
@ -706,7 +738,10 @@ static int smc_llc_add_link_cont(struct smc_link *link,
|
|||
addc_llc->hd.length = sizeof(struct smc_llc_msg_add_link_cont);
|
||||
if (lgr->role == SMC_CLNT)
|
||||
addc_llc->hd.flags |= SMC_LLC_FLAG_RESP;
|
||||
return smc_wr_tx_send(link, pend);
|
||||
rc = smc_wr_tx_send(link, pend);
|
||||
put_out:
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int smc_llc_cli_rkey_exchange(struct smc_link *link,
|
||||
|
|
|
@ -496,7 +496,7 @@ static int smc_tx_rdma_writes(struct smc_connection *conn,
|
|||
/* Wakeup sndbuf consumers from any context (IRQ or process)
|
||||
* since there is more data to transmit; usable snd_wnd as max transmit
|
||||
*/
|
||||
static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
|
||||
static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
|
||||
{
|
||||
struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags;
|
||||
struct smc_link *link = conn->lnk;
|
||||
|
@ -505,8 +505,11 @@ static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
|
|||
struct smc_wr_buf *wr_buf;
|
||||
int rc;
|
||||
|
||||
if (!link || !smc_wr_tx_link_hold(link))
|
||||
return -ENOLINK;
|
||||
rc = smc_cdc_get_free_slot(conn, link, &wr_buf, &wr_rdma_buf, &pend);
|
||||
if (rc < 0) {
|
||||
smc_wr_tx_link_put(link);
|
||||
if (rc == -EBUSY) {
|
||||
struct smc_sock *smc =
|
||||
container_of(conn, struct smc_sock, conn);
|
||||
|
@ -547,22 +550,7 @@ static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
|
|||
|
||||
out_unlock:
|
||||
spin_unlock_bh(&conn->send_lock);
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn)
|
||||
{
|
||||
struct smc_link *link = conn->lnk;
|
||||
int rc = -ENOLINK;
|
||||
|
||||
if (!link)
|
||||
return rc;
|
||||
|
||||
atomic_inc(&link->wr_tx_refcnt);
|
||||
if (smc_link_usable(link))
|
||||
rc = _smcr_tx_sndbuf_nonempty(conn);
|
||||
if (atomic_dec_and_test(&link->wr_tx_refcnt))
|
||||
wake_up_all(&link->wr_tx_wait);
|
||||
smc_wr_tx_link_put(link);
|
||||
return rc;
|
||||
}
|
||||
|
||||
|
|
|
@@ -60,6 +60,20 @@ static inline void smc_wr_tx_set_wr_id(atomic_long_t *wr_tx_id, long val)
 	atomic_long_set(wr_tx_id, val);
 }
 
+static inline bool smc_wr_tx_link_hold(struct smc_link *link)
+{
+	if (!smc_link_usable(link))
+		return false;
+	atomic_inc(&link->wr_tx_refcnt);
+	return true;
+}
+
+static inline void smc_wr_tx_link_put(struct smc_link *link)
+{
+	if (atomic_dec_and_test(&link->wr_tx_refcnt))
+		wake_up_all(&link->wr_tx_wait);
+}
+
 static inline void smc_wr_wakeup_tx_wait(struct smc_link *lnk)
 {
 	wake_up_all(&lnk->wr_tx_wait);
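Every smc_llc_send_* hunk in this series follows the same shape: take a reference on the link before reserving a send slot, turn the early "return rc" paths into "goto put_out", and drop the reference on a single exit path. A condensed pattern sketch (send_one_llc_msg() is not a real kernel function; buffer setup and error handling are trimmed):

/* Pattern sketch of the hold/put discipline applied across smc_llc.c. */
static int send_one_llc_msg(struct smc_link *link)
{
	struct smc_wr_buf *wr_buf;
	struct smc_wr_tx_pend_priv *pend;
	int rc;

	if (!smc_wr_tx_link_hold(link))		/* pin link->wr_tx_refcnt */
		return -ENOLINK;
	rc = smc_llc_add_pending_send(link, &wr_buf, &pend);
	if (rc)
		goto put_out;			/* was: return rc */
	/* ... fill wr_buf ... */
	rc = smc_wr_tx_send(link, pend);
put_out:
	smc_wr_tx_link_put(link);		/* wake wr_tx_wait on last ref */
	return rc;
}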
@@ -828,7 +828,7 @@ static void unix_unhash(struct sock *sk)
 }
 
 struct proto unix_dgram_proto = {
-	.name = "UNIX-DGRAM",
+	.name = "UNIX",
 	.owner = THIS_MODULE,
 	.obj_size = sizeof(struct unix_sock),
 	.close = unix_close,
@ -468,10 +468,26 @@ out_bits()
|
|||
for i in {0..22}
|
||||
do
|
||||
ip -netns ioam-node-alpha route change db01::/64 encap ioam6 trace \
|
||||
prealloc type ${bit2type[$i]} ns 123 size ${bit2size[$i]} dev veth0
|
||||
prealloc type ${bit2type[$i]} ns 123 size ${bit2size[$i]} \
|
||||
dev veth0 &>/dev/null
|
||||
|
||||
run_test "out_bit$i" "${desc/<n>/$i}" ioam-node-alpha ioam-node-beta \
|
||||
db01::2 db01::1 veth0 ${bit2type[$i]} 123
|
||||
local cmd_res=$?
|
||||
local descr="${desc/<n>/$i}"
|
||||
|
||||
if [[ $i -ge 12 && $i -le 21 ]]
|
||||
then
|
||||
if [ $cmd_res != 0 ]
|
||||
then
|
||||
npassed=$((npassed+1))
|
||||
log_test_passed "$descr"
|
||||
else
|
||||
nfailed=$((nfailed+1))
|
||||
log_test_failed "$descr"
|
||||
fi
|
||||
else
|
||||
run_test "out_bit$i" "$descr" ioam-node-alpha ioam-node-beta \
|
||||
db01::2 db01::1 veth0 ${bit2type[$i]} 123
|
||||
fi
|
||||
done
|
||||
|
||||
bit2size[22]=$tmp
|
||||
|
@ -544,7 +560,7 @@ in_bits()
|
|||
local tmp=${bit2size[22]}
|
||||
bit2size[22]=$(( $tmp + ${#BETA[9]} + ((4 - (${#BETA[9]} % 4)) % 4) ))
|
||||
|
||||
for i in {0..22}
|
||||
for i in {0..11} {22..22}
|
||||
do
|
||||
ip -netns ioam-node-alpha route change db01::/64 encap ioam6 trace \
|
||||
prealloc type ${bit2type[$i]} ns 123 size ${bit2size[$i]} dev veth0
|
||||
|
|
|
@ -94,16 +94,6 @@ enum {
|
|||
TEST_OUT_BIT9,
|
||||
TEST_OUT_BIT10,
|
||||
TEST_OUT_BIT11,
|
||||
TEST_OUT_BIT12,
|
||||
TEST_OUT_BIT13,
|
||||
TEST_OUT_BIT14,
|
||||
TEST_OUT_BIT15,
|
||||
TEST_OUT_BIT16,
|
||||
TEST_OUT_BIT17,
|
||||
TEST_OUT_BIT18,
|
||||
TEST_OUT_BIT19,
|
||||
TEST_OUT_BIT20,
|
||||
TEST_OUT_BIT21,
|
||||
TEST_OUT_BIT22,
|
||||
TEST_OUT_FULL_SUPP_TRACE,
|
||||
|
||||
|
@ -125,16 +115,6 @@ enum {
|
|||
TEST_IN_BIT9,
|
||||
TEST_IN_BIT10,
|
||||
TEST_IN_BIT11,
|
||||
TEST_IN_BIT12,
|
||||
TEST_IN_BIT13,
|
||||
TEST_IN_BIT14,
|
||||
TEST_IN_BIT15,
|
||||
TEST_IN_BIT16,
|
||||
TEST_IN_BIT17,
|
||||
TEST_IN_BIT18,
|
||||
TEST_IN_BIT19,
|
||||
TEST_IN_BIT20,
|
||||
TEST_IN_BIT21,
|
||||
TEST_IN_BIT22,
|
||||
TEST_IN_FULL_SUPP_TRACE,
|
||||
|
||||
|
@ -199,30 +179,6 @@ static int check_ioam_header(int tid, struct ioam6_trace_hdr *ioam6h,
|
|||
ioam6h->nodelen != 2 ||
|
||||
ioam6h->remlen;
|
||||
|
||||
case TEST_OUT_BIT12:
|
||||
case TEST_IN_BIT12:
|
||||
case TEST_OUT_BIT13:
|
||||
case TEST_IN_BIT13:
|
||||
case TEST_OUT_BIT14:
|
||||
case TEST_IN_BIT14:
|
||||
case TEST_OUT_BIT15:
|
||||
case TEST_IN_BIT15:
|
||||
case TEST_OUT_BIT16:
|
||||
case TEST_IN_BIT16:
|
||||
case TEST_OUT_BIT17:
|
||||
case TEST_IN_BIT17:
|
||||
case TEST_OUT_BIT18:
|
||||
case TEST_IN_BIT18:
|
||||
case TEST_OUT_BIT19:
|
||||
case TEST_IN_BIT19:
|
||||
case TEST_OUT_BIT20:
|
||||
case TEST_IN_BIT20:
|
||||
case TEST_OUT_BIT21:
|
||||
case TEST_IN_BIT21:
|
||||
return ioam6h->overflow ||
|
||||
ioam6h->nodelen ||
|
||||
ioam6h->remlen != 1;
|
||||
|
||||
case TEST_OUT_BIT22:
|
||||
case TEST_IN_BIT22:
|
||||
return ioam6h->overflow ||
|
||||
|
@ -326,6 +282,66 @@ static int check_ioam6_data(__u8 **p, struct ioam6_trace_hdr *ioam6h,
|
|||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit12) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit13) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit14) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit15) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit16) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit17) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit18) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit19) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit20) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit21) {
|
||||
if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff)
|
||||
return 1;
|
||||
*p += sizeof(__u32);
|
||||
}
|
||||
|
||||
if (ioam6h->type.bit22) {
|
||||
len = cnf.sc_data ? strlen(cnf.sc_data) : 0;
|
||||
aligned = cnf.sc_data ? __ALIGN_KERNEL(len, 4) : 0;
|
||||
|
@@ -455,26 +471,6 @@ static int str2id(const char *tname)
 		return TEST_OUT_BIT10;
 	if (!strcmp("out_bit11", tname))
 		return TEST_OUT_BIT11;
-	if (!strcmp("out_bit12", tname))
-		return TEST_OUT_BIT12;
-	if (!strcmp("out_bit13", tname))
-		return TEST_OUT_BIT13;
-	if (!strcmp("out_bit14", tname))
-		return TEST_OUT_BIT14;
-	if (!strcmp("out_bit15", tname))
-		return TEST_OUT_BIT15;
-	if (!strcmp("out_bit16", tname))
-		return TEST_OUT_BIT16;
-	if (!strcmp("out_bit17", tname))
-		return TEST_OUT_BIT17;
-	if (!strcmp("out_bit18", tname))
-		return TEST_OUT_BIT18;
-	if (!strcmp("out_bit19", tname))
-		return TEST_OUT_BIT19;
-	if (!strcmp("out_bit20", tname))
-		return TEST_OUT_BIT20;
-	if (!strcmp("out_bit21", tname))
-		return TEST_OUT_BIT21;
 	if (!strcmp("out_bit22", tname))
 		return TEST_OUT_BIT22;
 	if (!strcmp("out_full_supp_trace", tname))
@@ -509,26 +505,6 @@ static int str2id(const char *tname)
 		return TEST_IN_BIT10;
 	if (!strcmp("in_bit11", tname))
 		return TEST_IN_BIT11;
-	if (!strcmp("in_bit12", tname))
-		return TEST_IN_BIT12;
-	if (!strcmp("in_bit13", tname))
-		return TEST_IN_BIT13;
-	if (!strcmp("in_bit14", tname))
-		return TEST_IN_BIT14;
-	if (!strcmp("in_bit15", tname))
-		return TEST_IN_BIT15;
-	if (!strcmp("in_bit16", tname))
-		return TEST_IN_BIT16;
-	if (!strcmp("in_bit17", tname))
-		return TEST_IN_BIT17;
-	if (!strcmp("in_bit18", tname))
-		return TEST_IN_BIT18;
-	if (!strcmp("in_bit19", tname))
-		return TEST_IN_BIT19;
-	if (!strcmp("in_bit20", tname))
-		return TEST_IN_BIT20;
-	if (!strcmp("in_bit21", tname))
-		return TEST_IN_BIT21;
 	if (!strcmp("in_bit22", tname))
 		return TEST_IN_BIT22;
 	if (!strcmp("in_full_supp_trace", tname))
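str2id() maps a test name string to its TEST_* value through a long strcmp() chain; the two hunks above simply drop the out_bit12..out_bit21 and in_bit12..in_bit21 branches. For reference, a table-driven sketch of the same kind of mapping; the entries shown are only a subset, and the real function's behaviour on an unknown name is not visible in this diff, so the -1 return is an assumption:

    /* Sketch only: a {name, id} table in place of the strcmp() chain.
     * Assumes <string.h> is available, as the surrounding file already
     * uses strcmp() and strlen().
     */
    static const struct {
    	const char *name;
    	int id;
    } test_map[] = {
    	{ "out_bit10", TEST_OUT_BIT10 },
    	{ "out_bit11", TEST_OUT_BIT11 },
    	{ "out_bit22", TEST_OUT_BIT22 },
    	{ "in_bit22", TEST_IN_BIT22 },
    };

    static int str2id_sketch(const char *tname)
    {
    	unsigned int i;

    	for (i = 0; i < sizeof(test_map) / sizeof(test_map[0]); i++)
    		if (!strcmp(test_map[i].name, tname))
    			return test_map[i].id;
    	return -1;	/* assumption: unknown names are rejected */
    }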
@@ -606,16 +582,6 @@ static int (*func[__TEST_MAX])(int, struct ioam6_trace_hdr *, __u32, __u16) = {
 	[TEST_OUT_BIT9] = check_ioam_header_and_data,
 	[TEST_OUT_BIT10] = check_ioam_header_and_data,
 	[TEST_OUT_BIT11] = check_ioam_header_and_data,
-	[TEST_OUT_BIT12] = check_ioam_header,
-	[TEST_OUT_BIT13] = check_ioam_header,
-	[TEST_OUT_BIT14] = check_ioam_header,
-	[TEST_OUT_BIT15] = check_ioam_header,
-	[TEST_OUT_BIT16] = check_ioam_header,
-	[TEST_OUT_BIT17] = check_ioam_header,
-	[TEST_OUT_BIT18] = check_ioam_header,
-	[TEST_OUT_BIT19] = check_ioam_header,
-	[TEST_OUT_BIT20] = check_ioam_header,
-	[TEST_OUT_BIT21] = check_ioam_header,
 	[TEST_OUT_BIT22] = check_ioam_header_and_data,
 	[TEST_OUT_FULL_SUPP_TRACE] = check_ioam_header_and_data,
 	[TEST_IN_UNDEF_NS] = check_ioam_header,
@@ -633,16 +599,6 @@ static int (*func[__TEST_MAX])(int, struct ioam6_trace_hdr *, __u32, __u16) = {
 	[TEST_IN_BIT9] = check_ioam_header_and_data,
 	[TEST_IN_BIT10] = check_ioam_header_and_data,
 	[TEST_IN_BIT11] = check_ioam_header_and_data,
-	[TEST_IN_BIT12] = check_ioam_header,
-	[TEST_IN_BIT13] = check_ioam_header,
-	[TEST_IN_BIT14] = check_ioam_header,
-	[TEST_IN_BIT15] = check_ioam_header,
-	[TEST_IN_BIT16] = check_ioam_header,
-	[TEST_IN_BIT17] = check_ioam_header,
-	[TEST_IN_BIT18] = check_ioam_header,
-	[TEST_IN_BIT19] = check_ioam_header,
-	[TEST_IN_BIT20] = check_ioam_header,
-	[TEST_IN_BIT21] = check_ioam_header,
 	[TEST_IN_BIT22] = check_ioam_header_and_data,
 	[TEST_IN_FULL_SUPP_TRACE] = check_ioam_header_and_data,
 	[TEST_FWD_FULL_SUPP_TRACE] = check_ioam_header_and_data,
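The func[] array is indexed by the TEST_* enum and selects the checker for each test, with the function-pointer signature shown in the hunk headers above. Conceptually, the caller ends up dispatching as in the sketch below; the wrapper, its parameter names and the NULL guard are assumptions, since the call site is not part of this diff:

    /* Sketch only: dispatch through the func[] table. trace_type and
     * ioam_ns are placeholder names for the __u32 and __u16 arguments
     * of the function-pointer signature.
     */
    static int run_check(int tid, struct ioam6_trace_hdr *ioam6h,
    		     __u32 trace_type, __u16 ioam_ns)
    {
    	if (tid < 0 || tid >= __TEST_MAX || !func[tid])
    		return 1;

    	return func[tid](tid, ioam6h, trace_type, ioam_ns);
    }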