Tuesday, 20 May 2014
IPv6 Tunnel over IPv4 Network
Posted by Unknown at 18:43
Labels: EIGRP, GNS3, IPv6, Networking
This topology is an example of a Manually Configured Tunnel (MCT), one type of static point-to-point IPv6 tunnel. The lab uses tunnel mode ipv6ip; MCT is defined in RFC 4213.
There are two types of static point-to-point tunnels :-
1. Manually Configured Tunnels (MCT)
2. Generic Routing Encapsulation (GRE) tunnels
To support IGPs and other features over these static tunnels, the router assigns link-local addresses to the tunnel links and allows forwarding of IPv6 multicast traffic.
This topology also makes clear that if a router has only IPv6 addresses and no IPv4 address, a router ID must be configured manually in the routing protocol; otherwise the EIGRP process cannot start and that router will not exchange routes with the others, because EIGRP for IPv6 normally derives its 32-bit router ID from an IPv4 address.
That is why routers R5 and R6 are given the router IDs 5.5.5.5 and 6.6.6.6.
Tunneling is one of the ways to carry IPv6 traffic across an IPv4 network. In this topology an IPv6 tunnel runs from router R1 to R4, and the tunnel interfaces are addressed from the subnet 2003::/64.
In GNS3 (as on real IOS), IPv6 routing must be enabled globally and EIGRP for IPv6 must be enabled directly on each interface; it cannot be enabled with a network command under the routing process.
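As a minimal sketch, this is roughly what enabling IPv6 routing and interface-level EIGRP looks like on R1. The AS number 2 is from this lab; the interface name FastEthernet0/1 and the address 2001::2/64 facing R5 are assumptions inferred from the traceroute output later in the post:
ipv6 unicast-routing
!
ipv6 router eigrp 2
 no shutdown
!
interface FastEthernet0/1
 ipv6 address 2001::2/64
 ipv6 eigrp 2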
IPv6 Tunnel
On Router R1-
interface Tunnel0
no ip address
ip mtu 1000
ipv6 address 2003::1/64
ipv6 eigrp 2
tunnel source FastEthernet0/0
tunnel destination 10.3.0.2
tunnel mode ipv6ip
On Router R4-
interface Tunnel0
no ip address
ip mtu 1000
ipv6 address 2003::2/64
ipv6 eigrp 2
tunnel source FastEthernet0/0
tunnel destination 10.1.0.1
tunnel mode ipv6ip
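For the tunnel endpoints to reach each other, the source interfaces must already have IPv4 connectivity across the core. A rough sketch of what is assumed on R1's FastEthernet0/0 (the /24 mask is an assumption, and the IPv4 routing between the 10.1.0.0 and 10.3.0.0 networks is not shown):
interface FastEthernet0/0
 ip address 10.1.0.1 255.255.255.0
 no shutdown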
Router-ID for Router R5 :-
ipv6 router eigrp 2
eigrp router-id 5.5.5.5
Router-ID for Router R6 :-
ipv6 router eigrp 2
eigrp router-id 6.6.6.6
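Once the router IDs are in place, the adjacencies and learned routes can be checked with the standard EIGRP for IPv6 show commands (output omitted here; it depends on the topology):
show ipv6 eigrp neighbors
show ipv6 route eigrp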
Verification of the tunnel :-
R1#show interfaces tunnel 0
Tunnel0 is up, line protocol is up
Hardware is Tunnel
MTU 17920 bytes, BW 100 Kbit/sec, DLY 50000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation TUNNEL, loopback not set
Keepalive not set
Tunnel source 10.1.0.1 (FastEthernet0/0), destination 10.3.0.2
Tunnel protocol/transport IPv6/IP
Tunnel TTL 255
Tunnel transport MTU 1480 bytes
Tunnel transmit bandwidth 8000 (kbps)
Tunnel receive bandwidth 8000 (kbps)
Last input 00:00:00, output 00:00:01, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/0 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
386 packets input, 60398 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
392 packets output, 40546 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
The output shows that the tunnel interface is up and confirms the source and destination IPv4 addresses. It also confirms that the tunnel protocol is IPv6 over IP and that the tunnel transport MTU is 1480 bytes for MCT; for GRE it is 1476 bytes. In the MCT case, the link-local address of the tunnel interface is based on FE80::/96 plus the 32 bits of the tunnel source IPv4 address.
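Applying that rule to the tunnel sources in this lab gives the following link-local addresses (a worked example based on the addressing above, not copied from router output); they can be confirmed with show ipv6 interface tunnel 0:
10.1.0.1 = 0A01:0001 in hex  ->  R1 Tunnel0 link-local FE80::A01:1
10.3.0.2 = 0A03:0002 in hex  ->  R4 Tunnel0 link-local FE80::A03:2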
GRE Tunnel
For the GRE version of this topology I changed the tunnel mode from ipv6ip to gre ip: first remove the old mode with the no tunnel mode ipv6ip command, then apply tunnel mode gre ip.
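The change has to be made on both tunnel endpoints; on R1 it looks like this (repeat the same on R4):
R1(config)#interface Tunnel0
R1(config-if)#no tunnel mode ipv6ip
R1(config-if)#tunnel mode gre ip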
R1#show interfaces tunnel 0
Tunnel0 is up, line protocol is up
Hardware is Tunnel
MTU 17916 bytes, BW 100 Kbit/sec, DLY 50000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation TUNNEL, loopback not set
Keepalive not set
Tunnel source 10.1.0.1 (FastEthernet0/0), destination 10.3.0.2
Tunnel protocol/transport GRE/IP
Key disabled, sequencing disabled
Checksumming of packets disabled
Tunnel TTL 255
Fast tunneling enabled
Tunnel transport MTU 1476 bytes
Tunnel transmit bandwidth 8000 (kbps)
Tunnel receive bandwidth 8000 (kbps)
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/0 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
700 packets input, 104149 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
711 packets output, 73539 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 output buffer failures, 0 output buffers swapped out
This output shows that the tunnel protocol is now GRE/IP and that the tunnel transport MTU is 1476 bytes, 4 bytes less than in ipv6ip mode; those 4 bytes are the GRE header. With a GRE tunnel, the link-local address of the tunnel interface is based on IPv6 EUI-64, using the MAC address of the router's lowest-numbered interface. GRE is defined in RFC 2784.
Ping from Router R5 to R6
R5#ping 2002::1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2002::1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 72/106/212 ms
Ping from Router R6 to R5
R6#ping 2001::1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2001::1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 68/78/88 ms
Traceroute from R5 to R6
R5#traceroute 2002::1
Type escape sequence to abort.
Tracing the route to 2002::1
1 2001::2 56 msec 8 msec *
2 2003::2 80 msec 72 msec 56 msec
3 2002::1 104 msec 76 msec 44 msec
Configuration File config.zip
If you are interested in the .net topology file for GNS3, leave your email ID in the comment box.
By :- Vishal Sharma
Friday, 16 May 2014
%DUAL-5-NBRCHANGE: IP-EIGRP(0)
Posted by Unknown at 19:42
I ran into this error yesterday and searched everywhere on the internet for it; most sources say it is caused by the MTU packet size. That surprised me, because I had learned that large packets are handled by fragmentation (at the IP layer) or segmentation (in TCP). Neither process applies here: EIGRP packets are transported directly over IP as protocol type 88 using the Reliable Transport Protocol (RTP), not TCP or UDP, so there is no TCP segmentation, and an oversized EIGRP packet can simply be dropped along the path.
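If you want to confirm on the router that EIGRP is retransmitting and losing packets while this is happening, the standard commands below can help (output omitted; use debug with care on a busy router):
show ip eigrp traffic
debug eigrp packets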
In some topologies EIGRP neighbors keep flapping (up/down). This makes the network unstable and disrupts communication, and it shows up as errors like these :-
%DUAL-5-NBRCHANGE: IPv6-EIGRP(0) 1: Neighbor FE80::2 (Tunnel0) is up: new adjacency
%DUAL-5-NBRCHANGE: IPv6-EIGRP(0) 1: Neighbor FE80::2 (Tunnel0) is down: holding time expired
That is the IPv6 tunneling case. The same error also appears with IPv4, for example :-
%DUAL-5-NBRCHANGE: EIGRP-IPv4 1: Neighbor 192.168.1.1 (Tunnel3) is up: new adjacency
Basically the error is caused by EIGRP packet size: by default the MTU is 1500 bytes on each router, and EIGRP updates built to that size can be too large for the path. The problem can be solved by reducing the MTU on the tunnel interface.
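A minimal sketch of that fix on the tunnel interfaces of this lab (1400 is just an illustrative value below the smallest MTU in the path; the configuration earlier in this post used ip mtu 1000 for the same reason):
interface Tunnel0
 ip mtu 1400
 ipv6 mtu 1400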
In the end I found this explanation for the error :-
"Two routers peer with MTU of 1500, but in the path between them there is a L2 hop that has a smaller MTU. 1492 for example. The two routers will hello and establish neighbor with smaller than 1492 packets and have no problem. The first time one of the routers tries to send routes to the other router the packet will likely exceed the 1492 MTU and be dropped along the way. The router will retransmit several times and never get the ack, then it will dump the neighbor relationship and re-learn the neighbor - <repeat until someone figures the problem out>. In a topology with multiple paths this type of problem can be delayed in manifesting itself because EIGRP doesn't share routes with neighbors unless they ask for them. I had this problem occur three days after deploying a new L2 WAN circuit because there was no significant EIGRP convergence event until 3 days later. Lesson learned -> always 'ping size 1500 df-bit x.x.x.x' new WAN paths."
Resource for this explanation :- https://learningnetwork.cisco.com/thread/43100#233367
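As the quoted post suggests, a new path can be tested in advance with a full-size, don't-fragment ping; on IOS that looks something like this (the destination 10.3.0.2 is taken from this lab, the rest is standard ping syntax):
R1#ping 10.3.0.2 size 1500 df-bit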
By :- Vishal Sharma