Peplink Device Throughput

How to get the most out of your Peplink device

Whatever WAN connections you are using, it is always a good idea to test each individually and repeatedly to discover its maximum throughput in both directions. Remember, bandwidth availability can vary throughout the day, especially if using cellular or fixed lines with variable contention.
Customers are often surprised by how much bandwidth availability can vary between links from the same ISP, and by how different the actual available bandwidth can be from the figure advertised by the ISP.
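If you want to automate those repeated checks, the short sketch below (not a Peplink tool, just an illustration using the open-source speedtest-cli Python module) samples one WAN link several times; the sample count and interval are arbitrary assumptions you can adjust:

# Minimal sketch: repeatedly sample one WAN link's throughput with the
# open-source speedtest-cli module (pip install speedtest-cli).
# The sample count and 15-minute interval are arbitrary assumptions.
import time
import speedtest

SAMPLES = 8
INTERVAL_SECONDS = 15 * 60

for i in range(SAMPLES):
    st = speedtest.Speedtest()
    st.get_best_server()
    down_mbps = st.download() / 1_000_000  # bits/s -> Mbps
    up_mbps = st.upload() / 1_000_000
    print(f"sample {i + 1}: down {down_mbps:.1f} Mbps, up {up_mbps:.1f} Mbps")
    time.sleep(INTERVAL_SECONDS)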

Peplink Device | Group | Stateful Firewall | PepVPN / SpeedFusion (No Encryption) | PepVPN / SpeedFusion (256-bit AES)
Balance 20 | Balance | 150 Mbps | 60 Mbps | 30 Mbps
Balance 20X | X | 900 Mbps | 100 Mbps | 60 Mbps
Balance 30 LTE | Balance | 400 Mbps | 120 Mbps | 55 Mbps
Balance 30 Pro | Balance | 400 Mbps | 120 Mbps | 55 Mbps
Balance One / Core | Balance | 600/400 Mbps | 60 Mbps | 30 Mbps
Balance Two | Balance | 1000 Mbps | 150 Mbps | 150 Mbps
Balance 210 | Balance | 350 Mbps | 150 Mbps | 150 Mbps
Balance 310X | X | 2500 Mbps | 600 Mbps | 500 Mbps
Balance 305 | Balance | 1000 Mbps | 150 Mbps | 150 Mbps
Balance 380 | Balance | 1000 Mbps | 150 Mbps | 150 Mbps
Balance 380X | X | 3000 Mbps | 500 Mbps | 500 Mbps
Balance 580 | Balance | 1500 Mbps | 200 Mbps | 200 Mbps
Balance 580X | X | 4000 Mbps | 500 Mbps | 500 Mbps
Balance 710 | Balance | 2500 Mbps | 400 Mbps | 400 Mbps
Balance 1350 | Balance | 5000 Mbps | 800 Mbps | 800 Mbps
Balance 2500 | Balance | 8000 Mbps | 2000 Mbps | 2000 Mbps
Balance SDX | X | 12000 Mbps | 1000 Mbps | 600 Mbps
Balance SDX Pro | X | 24000 Mbps | 1000 Mbps | 600 Mbps
EPX | X | 30000 Mbps | 4000 Mbps | 2000 Mbps
MAX BR1 Classic | BR1 | 100 Mbps | 40 Mbps | 20 Mbps
MAX BR1 MK2 | BR1 | 100 Mbps | 40 Mbps | 20 Mbps
MAX BR1 Mini | BR1 | 100 Mbps | 40 Mbps | 20 Mbps
MAX BR1 M2M | BR1 | 100 Mbps | 40 Mbps | 20 Mbps
MAX BR1 Slim | BR1 | 100 Mbps | 40 Mbps | 20 Mbps
MAX BR1 ENT | BR1 | 300 Mbps | 100 Mbps | 60 Mbps
MAX BR1 Pro | BR1 | 120 Mbps | 40 Mbps | 20 Mbps
MAX Hotspot | BR1 | 100 Mbps | 40 Mbps | 20 Mbps
MAX HD2 | HD | 400 Mbps | 120 Mbps | 55 Mbps
MAX HD2 Mini | HD | 300 Mbps | 100 Mbps | 60 Mbps
MAX HD4 | HD | 400 Mbps | 150 Mbps | 120 Mbps
MAX HD2/HD4 MBX | HD | 2500 Mbps | 600 Mbps | 500 Mbps
MAX HD2/HD4 with MediaFast | HD | 400 Mbps | 150 Mbps | 120 Mbps
MAX BR1 IP55 | OD | 100 Mbps | 40 Mbps | 20 Mbps
MAX BR2 IP55 | OD | 100 Mbps | 40 Mbps | 20 Mbps
MAX BR1 IP67 | OD | 100 Mbps | 40 Mbps | 20 Mbps
MAX HD2 IP67 | OD | 400 Mbps | 100 Mbps | 60 Mbps
MAX HD4 IP67 | OD | 400 Mbps | 150 Mbps | 120 Mbps
MAX HD1 Dome | OD | 600/150 Mbps | 100 Mbps | 60 Mbps
MAX HD2 Dome | OD | 1200/150 Mbps | 100 Mbps | 60 Mbps
PDX | OD | 2500 Mbps | 600 Mbps | 500 Mbps
MAX Transit / Transit Duo | Transit | 400 Mbps | 100 Mbps | 60 Mbps
MAX Transit Mini | Transit | 100 Mbps | 40 Mbps | 20 Mbps
SpeedFusion Engine | SF | 100 Mbps | 40 Mbps | 20 Mbps
SpeedFusion Engine Cam | SF | 400 Mbps | 100 Mbps | 60 Mbps
Surf SOHO | SOHO | 120 Mbps | 40 Mbps | 20 Mbps

Bandwidth Bonding Overhead


SpeedFusion and the Internet Mix (IMIX)

Internet Mix (IMIX) is a measurement of typical internet traffic passing through network equipment such as routers, switches, or firewalls. When equipment performance is measured using an IMIX of packets, the results are assumed to resemble what would be seen in real-world use.
The IMIX traffic profile is used in the industry to simulate real-world traffic patterns and packet distributions. IMIX profiles are based on statistical sampling done on internet routers. More information about IMIX can be found here: https://en.wikipedia.org/wiki/Internet_Mix

The IMIX Standard and the SpeedFusion Overhead

As the chart below shows, when a SpeedFusion VPN tunnel is used to transmit one group of IMIX data (4084 bytes), an additional 960 bytes of SpeedFusion overhead is required.
The SpeedFusion overhead is 19% of the total transmitted data (IMIX + overhead). Since the overhead is a fixed number of bytes per transmitted packet (an additional 80 bytes), SpeedFusion Bandwidth Bonding is much more efficient when transmitting larger packet sizes. At packet sizes of 1500 bytes, SpeedFusion adds just 5% bandwidth overhead, but at packet sizes of 40 bytes, SpeedFusion overhead rises to 200%.
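To make that arithmetic explicit, here is a minimal Python sketch that reproduces the numbers above using the simple IMIX group (seven 40-byte, four 576-byte and one 1500-byte packet, totalling the 4084 bytes mentioned) together with the fixed 80-byte per-packet SpeedFusion overhead:

# Sketch: per-packet overhead arithmetic for the simple IMIX group
# (7 x 40-byte, 4 x 576-byte and 1 x 1500-byte packets = 4084 bytes),
# using the fixed 80 bytes of SpeedFusion overhead per packet.
OVERHEAD_PER_PACKET = 80  # bytes

imix = {40: 7, 576: 4, 1500: 1}  # packet size (bytes) -> packets per group
payload = sum(size * count for size, count in imix.items())  # 4084 bytes
packets = sum(imix.values())                                 # 12 packets
overhead = packets * OVERHEAD_PER_PACKET                     # 960 bytes

print(f"IMIX overhead share: {overhead / (payload + overhead):.1%}")  # ~19%

for size in (1500, 576, 40):
    # 1500 bytes -> ~5%, 40 bytes -> 200%
    print(f"{size}-byte packets: {OVERHEAD_PER_PACKET / size:.0%} overhead")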

[Chart: IMIX packets]

Bandwidth aggregation over multiple links



Accounting for SpeedFusion bandwidth overhead, and assuming the traffic passing across the links resembles the IMIX standard mentioned previously, we can calculate the available real-world bandwidth at a remote site with two bonded WAN links of 10 Mbps down / 2 Mbps up each:

Download: 10 Mbps + 10 Mbps = 20 Mbps x (1 - 19%) = 16.2 Mbps
Upload: 2 Mbps + 2 Mbps = 4 Mbps x (1 - 19%) = 3.24 Mbps
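As a quick sanity check, the following sketch applies the same ~19% IMIX overhead figure to the two example links (10 Mbps down / 2 Mbps up each):

# Sketch: effective bonded throughput after SpeedFusion overhead,
# assuming IMIX-like traffic (~19% overhead, as derived above).
IMIX_OVERHEAD = 0.19

def effective_bonded_mbps(link_speeds_mbps):
    return sum(link_speeds_mbps) * (1 - IMIX_OVERHEAD)

print(f"download: {effective_bonded_mbps([10, 10]):.2f} Mbps")  # 16.20 Mbps
print(f"upload:   {effective_bonded_mbps([2, 2]):.2f} Mbps")    # 3.24 Mbps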

It is important to explain SpeedFusion bandwidth overhead to your end users so that they understand why they will not get the full 20 Mbps / 4 Mbps of bandwidth when using VPN bonding. Remember, conventional VPN technology such as IPsec has an overhead of 14.6%. For only around 4% of additional overhead, SpeedFusion provides bandwidth aggregation and WAN resilience.

InControl 2 is the management platform for Peplink devices

This means that everything you configure in InControl is synced back to the local device configuration and processed by the local hardware. InControl does not do any processing of its own; it is there for easy management, easy viewing, and platform-wide rollouts. Therefore, more processing power at the device level gives a clear advantage.


My device doesn't reach the promised throughput ...

Please check the reasons and solutions below to help you get the most out of your device.

If a feature you enable is CPU-intensive, your throughput can be lower. Tasks like QoS and bandwidth management are CPU-intensive because every packet has to be inspected, so the more features you activate, the higher the load on your device's CPU. For example, the 5-WAN license reduces the throughput of the Balance One and One Core from 600 Mbps to 400 Mbps.

First, check your throughput on a factory-reset device with just a PepVPN tunnel; you should now reach the promised throughput listed in the table above. Then enable and test your required features one at a time:
● SpeedFusion bonding
● Bandwidth management
● QoS packet inspection
● Firewall rules
● DNS proxy

On every router (as on every computer), the available resources are shared among all processes; the device can only do so much at any given time. If you need more throughput, please consider upgrading your device; options such as a trade-in or a return of your current device are available.

When latency characteristics are the same across the connected WAN links, latency has very little effect on SpeedFusion bandwidth throughput. However, when the latency of the WAN links varies considerably, throughput is affected. In certain conditions, such as regular, periodic packet loss on an LTE link combined with the typically high latency found on these types of links, TCP's method of retransmitting lost packets can have a drastic effect on the bandwidth available over the VPN. This is another reason why we recommend that, whenever possible, high-latency links be used for failover and not as active SpeedFusion WAN links.

In cases of high variation in WAN link latency, the best approach (assuming there is enough bandwidth on the low-latency links) is to allocate the lower-latency links to SpeedFusion while setting the higher-latency links as failover connections. Another approach is to use the higher-latency links for specific direct traffic types that are not as latency-sensitive (like direct internet access), while reserving the lower-latency links for other important corporate traffic that needs to transit the VPN, such as VoIP and ERP.
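As an illustration of that allocation rule (this is not a built-in Peplink feature, just a sketch with assumed latency figures and an assumed 30 ms tolerance), you could classify links like this:

# Sketch: classify WAN links for SpeedFusion bonding vs. failover by latency.
# The 30 ms tolerance and the example latencies are illustrative assumptions.
LATENCY_TOLERANCE_MS = 30

def split_links(latencies_ms):
    """Return (bonding, failover) link names from measured latencies."""
    best = min(latencies_ms.values())
    bonding = [name for name, lat in latencies_ms.items()
               if lat <= best + LATENCY_TOLERANCE_MS]
    failover = [name for name in latencies_ms if name not in bonding]
    return bonding, failover

links = {"Fibre": 8, "DSL": 22, "LTE": 95, "VSAT": 620}  # ms, examples
print(split_links(links))  # (['Fibre', 'DSL'], ['LTE', 'VSAT'])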

The amount of bandwidth available on an LTE or satellite data connection depends on a number of factors:

● Signal Strength - Determined by the distance to the nearest cellular tower (or visibility of the satellite) and the resulting signal quality received.

● Backhaul Bandwidth Availability - From the cellular tower to the ISP's core network, or from the satellite ground station to the ISP's core network.

● Device Contention - At the tower or satellite you are connected to (determined by the number of active subscribers on that tower or satellite at any given moment).

We frequently see users who have 100% signal strength yet get only a small percentage of the bandwidth. This can be because there simply isn't enough backhaul bandwidth from the cell tower or ground station to the ISP's backbone, and what is available is saturated. When the issue is contention on the service, the easiest way to improve your chances of getting more bandwidth is to create another connection using an additional VSAT or cellular modem. This gives your devices twice as many transmission slots, increasing the amount of bandwidth available to your SpeedFusion connection.

Most internet connections are provided as a contended service. This means that although your provider has advertised up to 24 Mbps broadband over DSL, for example, the bandwidth actually available at any given moment could be considerably less, depending on how oversubscribed your DSL service is (literally how many people in your area are connected to the ISP's service).
The amount of bandwidth available on a contended service can vary considerably over the course of a day (and even minute to minute), with higher speeds possible during working hours compared to the evenings, when your neighbours are home and using the same internet service heavily.
Adding an additional fixed-line service from the same ISP can give you a 'bigger share' of the bandwidth that is available. 1:1 contended services are available from ISPs to counter this issue but are naturally more expensive than 20:1 or 50:1 services.
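As a rough worst-case illustration of what those contention ratios mean for the advertised 24 Mbps example (illustrative figures, not ISP guarantees):

# Sketch: worst-case share of an advertised 24 Mbps DSL line when every
# subscriber in the contention group is active at once (illustrative only).
advertised_mbps = 24
for ratio in (1, 20, 50):
    print(f"{ratio}:1 contention -> {advertised_mbps / ratio:.2f} Mbps worst case")
# 1:1 -> 24.00 Mbps, 20:1 -> 1.20 Mbps, 50:1 -> 0.48 Mbps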

New Zealand ISPs operate a massive content cache platform. This has grown historically, as NZ is a long way from the mainstream internet backbones. To avoid bottlenecks, a lot of content is cached within New Zealand: if you download a 12 GB OS image, you are not downloading it from overseas; instead, the ISP's DNS redirects you to the in-country content cache and the download stays within NZ, never touching or congesting international links.

New Zealand's in-country content cache sends out packets with a payload close to jumbo frames (4,000 to 7,000 bytes per packet). Usually that won't work on the internet, as the MTU (maximum transmission unit per packet) is fixed at 1500 bytes (minus protocol overhead). These jumbo-sized frames simply don't fit into a VPN tunnel (from any vendor), which means they need to be split (fragmented), buffered, sent in many small chunks through the tunnel, and then reassembled on the other side. This creates big side effects, not only on CPU load but also on local buffer limits, lost fragments, and the resulting packet retransmissions.
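To see why those jumbo payloads are so costly inside a tunnel, here is a small sketch of the fragment counts, assuming roughly 1400 bytes of usable tunnel payload per packet (an assumed figure: a 1500-byte MTU minus around 100 bytes of IP/VPN headers):

# Sketch: fragment counts for jumbo cache payloads entering a tunnel with
# roughly 1400 bytes of usable payload per packet (assumed figure only).
import math

USABLE_TUNNEL_PAYLOAD = 1400  # bytes, assumption

for payload in (1500, 4000, 7000):
    fragments = math.ceil(payload / USABLE_TUNNEL_PAYLOAD)
    print(f"{payload}-byte payload -> {fragments} fragments to reassemble")
# 1500 -> 2, 4000 -> 3, 7000 -> 5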

Workaround for now: The NZ ISPs redirect to the local content delivery platform via their own DNS servers (usually assigned automatically by the ISP to your WANs). If you remove the ISP DNS servers and replace them on all participating sites (including computers with their own DNS settings) with public global DNS servers, you will no longer be redirected to the national content cache platform. Instead, you will receive data from the nearest overseas server, which uses standard packet sizes.
