{{>toc}}
h1. AtelierPPS2012
An attack on the Gitoyen network took place on June 18 and another on tetaneutral.net on June 29. Both were "packets per second" (PPS) attacks using small 50-60 byte packets, which saturate the CPUs of software routers.
The idea is to study, through web research and labs/workshops, how software routers behave in that situation: which limits are reached depending on configuration and hardware (network card, CPU and clock frequency).
h2. Liens
* http://lists.tetaneutral.net/pipermail/technique/2012-July/000406.html
* http://guerby.org/ftp/dos-tetaneutral-20120629-12h33-13h03-pps.png
* http://networkstatic.net/the-sdn-impact-on-net-neutrality/
* http://blog.exceliance.fr/2012/04/24/hypervisors-virtual-network-performance-comparison-from-a-virtualized-load-balancer-point-of-view/
* http://www.spinics.net/lists/netdev/msg206077.html
** So with your patch, Eric's patch, and this most recent patch we are now at 11.8Mpps with 8 or 9 queues. At this point I am starting to hit the hardware limits since 82599 will typically max out at about 12Mpps w/ 9 queues.
** 12e6 * 64 byte * 8 = 6.1 Gbit/s
** PATCH Remove the ipv4 routing cache http://www.spinics.net/lists/netdev/msg205545.html
* Intel® 82599 10 Gigabit Ethernet Controller http://ark.intel.com/products/series/32609
* more interrupts (hence lower performance) on bare metal than when running in a VM https://lkml.org/lkml/2012/7/27/490
100 Mbit/s = 195,312 frames of 64 bytes per second
1000 Mbit/s = 1,953,125 frames of 64 bytes per second
(counting payload bits only; with the 20-byte preamble/SFD/inter-frame gap included, as in the 14.88 Mpps figure below, the rates are about 148,810 and 1,488,095 frames/s; see the calculation sketch after this list)
* http://dpdk.org/ml/archives/dev/2013-May/000102.html
** In case of 64 byte packets (with Ethernet CRC), (64+20)*8 = 672 bits. So line rate is 10000/672 = 14.88 Mpps.
** Intel Data Plane Development Kit (Intel® DPDK) Overview Packet Processing on Intel® Architecture http://www.intel.com/content/dam/www/public/us/en/documents/presentation/dpdk-packet-processing-ia-overview-presentation.pdf
* http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/packet-processing-is-enhanced-with-software-from-intel-dpdk.html
** 80 Mpps per Xeon processor
** http://www.intel.com/content/www/us/en/communications/communications-packet-processing-brief.html
* discussion on choosing a router and on PPS attacks: http://www.mail-archive.com/frnog@frnog.org/msg19673.html
* netmap project http://info.iet.unipi.it/~luigi/netmap/
** http://lwn.net/Articles/484323/
** http://info.iet.unipi.it/~luigi/papers/20120503-netmap-atc12.pdf
*** "In our prototype, a single core running at 900 MHz can send or receive 14.88 Mpps (the peak packet rate on 10 Gbit/s links). This is more than 20 times faster than conventional APIs."
** http://info.iet.unipi.it/~luigi/netmap/20110729-rizzo-infocom.pdf
** VALE, a Virtual Local Ethernet http://info.iet.unipi.it/~luigi/vale/
*** http://info.iet.unipi.it/~luigi/papers/20120608-vale.pdf
*** " Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines such as QEMU, KVM and others, as well as regular processes, to achieve over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance"
** Towards a Billion Routing Lookups per Second in Software http://info.iet.unipi.it/~luigi/papers/20120601-dxr.pdf
** http://info.iet.unipi.it/~luigi/netmap/talk-hp.html
** http://marc.info/?a=133836981100006&r=1&w=4
** 10 Gbit/s Line Rate Packet Processing Using Commodity Hardware: Survey and new Proposals http://luca.ntop.org/10g.pdf
* http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html
* ipfw 9-10 Mpps http://lists.freebsd.org/pipermail/freebsd-net/2012-July/032869.html
* PFQ project
** http://netgroup.iet.unipi.it/software/pfq/index.html
* Ubiquiti EdgeMax router
** http://www.ubnt.com/edgemax
** http://forum.ubnt.com/showthread.php?t=59312
** http://dl.ubnt.com/Tolly212127UbiquitiEdgeRouterLitePricePerformance.pdf
** http://dl.ubnt.com/Tolly212128UbiquitiEdgeRouterLitePricePerformanceVsMikroTik.pdf
* http://dpdk.org/
** Intel DPDK: Data Plane Development Kit
** Intel DPDK is a set of libraries and drivers for fast packet processing on x86 platforms. It runs mostly in Linux userland.
* http://www.slideshare.net/shemminger/uio-final
** Networking in Userspace: Living on the edge
* http://tech.slashdot.org/story/13/04/17/2014206/vint-cerf-sdn-is-a-model-for-a-better-internet
** http://slashdot.org/topic/datacenter/vint-cerf-sdn-is-a-model-for-a-better-internet/
* http://www.opendaylight.org/
** OpenDaylight's mission is to facilitate a community-led, industry-supported open source framework, including code and architecture, to accelerate and advance a common, robust Software-Defined Networking platform
* http://www.packetdam.com/
* http://www.cisco.com/web/partners/downloads/765/tools/quickreference/routerperformance.pdf
* http://osdir.com/ml/linux.drivers.e1000.devel/2007-05/msg00182.html
** "The network cards are perfectly capable of achieving much higher numbers than 135k pps. The linux network stack however is currently not."
* http://code.google.com/p/openpgm/
* http://afresh1.com/OpenBSD_49_Throughput_Latency/
* http://code.ettus.com/redmine/ettus/projects/public/wiki/Latency
* "10Gbps Open Source Routing" by Bengt Gördén, Olof Hagsand and Robert Olsson http://www.iis.se/docs/10G-OS-router_2_.pdf
* http://fr.slideshare.net/brouer/linuxcon2009-10gbits-bidirectional-routing-on-standard-hardware-running-linux
* 10 Gbit Hardware Packet Filtering Using Commodity Network Adapters http://ripe61.ripe.net/presentations/138-Deri_RIPE_61.pdf
* https://wiki.freebsd.org/NetworkPerformanceTuning
* http://wiki.networksecuritytoolkit.org/nstwiki/index.php/LAN_Ethernet_Maximum_Rates,_Generation,_Capturing_%26_Monitoring
* http://www.cisco.com/web/about/security/intelligence/network_performance_metrics.html
* http://blog.erratasec.com/2013/12/ccc-100-gbps-and-your-own-private-shodan.html
* https://github.com/robertdavidgraham/masscan
* http://www.ntop.org/products/pf_ring/
* http://routebricks.org/pubs.html
* http://lwn.net/Articles/542643/
** Chelsio's T5 asic moves the architecture into 40GbE speeds. T5 is a 10/40GbE controller with full offload support of a complete Unified Wire solution comprising NIC, Virtualization, TOE, iWARP RDMA and FCoE.
** http://dpdk.org/ml/archives/dev/2014-January/001111.html fix for atomics and out-of-order execution
* http://blog.erratasec.com/2013/10/whats-max-speed-on-ethernet.html
** What's the max speed on Ethernet?
* http://bsdrp.net/documentation/examples/forwarding_performance_lab_of_a_superserver_5018a-ftn4
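The frame-rate figures quoted in the list above can be recomputed in one place. A minimal sketch (plain awk, no DPDK involved) that applies the same arithmetic as the 14.88 Mpps line-rate calculation, with and without the 20 bytes of preamble, SFD and inter-frame gap:

<pre>
# Max 64-byte frame rates for 100 Mbit/s, 1 Gbit/s and 10 Gbit/s links,
# first counting payload bits only, then the full on-wire cost
# (64 bytes + 20 bytes of preamble/SFD/inter-frame gap).
awk 'BEGIN {
  for (g = 0; g <= 2; g++) {
    bps = 100e6 * 10^g
    printf "%11.0f bit/s : %8.0f fps (payload only), %8.0f fps (line rate)\n",
           bps, bps / (64 * 8), bps / ((64 + 20) * 8)
  }
}'
</pre>

At 10 Gbit/s the line-rate column gives the 14.88 Mpps figure cited above.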
h2. Interested people
# Laurent GUERBY
# Obinou (who has already used PF-RING and NTOP)
In principle, two machines are enough to start experimenting at home.
h2. Tests
e1000e on a D2500CC board (squeeze) and a Core i5 on a DQ67SW board (squeeze + kernel 3.2 from backports)
iperf tops out at 120-130k pps
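iperf is really a TCP/UDP throughput tool, so it tops out well before the NIC does on small frames. To push minimum-size frames harder, the in-kernel pktgen module is a common alternative. A minimal sketch, assuming eth1 is the interface under test and that the destination IP and MAC below are placeholders to adapt:

<pre>
modprobe pktgen
# attach eth1 to the first pktgen kernel thread
echo "rem_device_all" > /proc/net/pktgen/kpktgend_0
echo "add_device eth1" > /proc/net/pktgen/kpktgend_0
# 10M minimum-size UDP frames, sent as fast as possible
echo "pkt_size 60" > /proc/net/pktgen/eth1
echo "count 10000000" > /proc/net/pktgen/eth1
echo "delay 0" > /proc/net/pktgen/eth1
echo "clone_skb 1000" > /proc/net/pktgen/eth1            # reuse skbs to reduce allocation overhead
echo "dst 192.0.2.2" > /proc/net/pktgen/eth1              # placeholder destination IP
echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth1  # placeholder next-hop MAC
# start (blocks until done), then read back the achieved pps
echo "start" > /proc/net/pktgen/pgctrl
cat /proc/net/pktgen/eth1    # the "Result:" section reports the achieved pps
</pre>

On the receiving side, the interface counters (ip -s link, /proc/net/dev) show how many of those frames actually make it through the router under test.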
h2. sileht's DPDK notes:
Extract from: http://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-dpdk-getting-started-guide.pdf
h3. Hugepages configuration:
* 2 MB pages (1024 × 2 MB = 2 GB): hugepages=1024 (also settable at runtime: echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages)
* 1 GB pages (4 × 1 GB = 4 GB): default_hugepagesz=1G hugepagesz=1G hugepages=4 (can only be set at boot time, via the GRUB kernel command line)
<pre>
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
</pre>
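To check that the pages were actually reserved and to make the setup survive reboots, something along these lines should work (the fstab and GRUB entries are assumptions based on standard Debian conventions, not taken from the guide above):

<pre>
# verify the reservation
grep -i huge /proc/meminfo
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# make the hugetlbfs mount persistent
echo "nodev /mnt/huge hugetlbfs defaults 0 0" >> /etc/fstab

# for 1 GB pages, add the parameters to the kernel command line
# in /etc/default/grub, then run update-grub and reboot:
# GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=1G hugepagesz=1G hugepages=4"
</pre>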
h3. Compile and load modules:
_Note: tools/setup.sh is a helper script for this, but it only works with the IGB driver, not with e1000e._
<pre>
# make T=x86_64-default-linuxapp-gcc
..
Build complete
# modprobe uio (I think this is not useful)
# insmod build/kmod/rte_kni.ko (I think this is not useful)
# insmod build/kmod/igb_uio.ko (I think this is not useful)
# ./tools/pci_unbind.py --status
Network devices using IGB_UIO driver
====================================
<none>
Network devices using kernel driver
===================================
0000:00:19.0 'Ethernet Connection I217-LM' if=eth1 drv=e1000e unused=<none> *Active*
0000:04:00.0 'RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller' if=eth0 drv=r8169 unused=<none>
Other network devices
=====================
<none>
# ip link set eth1 down
# lspci|grep -i 'Ethernet.*Intel'
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-LM (rev 05)
# ./tools/pci_unbind.py --bind=e1000e 00:19.0
</pre>
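The last command above hands the NIC back to the kernel e1000e driver; to actually run DPDK on it, it has to be bound to igb_uio instead. A sketch of that step, assuming igb_uio.ko has been loaded as shown earlier (whether the DPDK 1.6 e1000/em PMD really supports the I217-LM is something to verify separately):

<pre>
# ip link set eth1 down
# ./tools/pci_unbind.py --bind=igb_uio 00:19.0
# ./tools/pci_unbind.py --status    (eth1 should now be listed under the IGB_UIO driver)
</pre>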
h3. Prepare example programs:
<pre>
# export RTE_SDK=/root/sileht/dpdk-1.6.0r1
# export RTE_TARGET=build
# cd /root/sileht/
# cp -r $RTE_SDK/examples/helloworld my_rte_app
# cd my_rte_app
# make
</pre>
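To run the freshly built example, the usual EAL arguments are needed: a core mask and the number of memory channels. A minimal sketch, assuming two cores, two memory channels and the hugepage mount configured above; helloworld should print a hello message from each enabled lcore:

<pre>
# cd /root/sileht/my_rte_app
# ./build/helloworld -c 0x3 -n 2
</pre>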
h3. Tests
<pre>
</pre>