Picking up the trail
I have to say that when I was writing part 1 of this series, I didn't expect things to unravel the way they did. I thought I would find the time to test the code against 2.10.4, look for any changes to the API and then provide specific examples. I would also show a typical setup for the Netbox installation procedure, both as a regular install and on Docker, and that would be it.
And then.. this happened:
https://www.networktocode.com/nautobot/ . NTC forked Netbox into Nautobot and launched the new project with a different mindset than the one behind Netbox. The announcement took the community by surprise, and confusion followed. Soon after, NTC representatives gave a few hours of sessions at NFD24 explaining what Nautobot is and describing several features that distinguish it from Netbox. There was still no clear explanation of what had happened, though, so a bit of drama played out on social media, mostly on Twitter and in Webex channels. The deletion of the commit history from the forked project fed into that drama, so a clarifying blog post by Jason Edelman followed, trying to set the record straight about why they forked the project. The commit history was also restored, as the post mentions.
I have no intention of taking sides in this, assuming there are sides to be taken; judging from what I have read on social media and in the posts made by the main actors, a large number of users seem to think there are. I do feel there is a strong possibility that all of this may harm the community. On the other hand, it could just be a normal step in the life of an open source project. Keep in mind that Jeremy Stretch, the lead maintainer of Netbox and generally considered its creator (although, as I learned, 154 people have contributed code to the project), left NTC, I assume over differences of opinion about Netbox's future, and founded his own company, Netverity, which also provides paid support for Netbox. Although I read rumors about his intention to move Netbox to a source-available license, I could not verify that, as the license on the Netbox GitHub repo is still Apache (at least it still was when I wrote this).
The next chapter in this story was that both the Netbox and Netbox-docker channels moved out of the NTC Slack server and over to https://netdev-community.slack.com/. The links above are invites to that Slack server, so if you want to join, browse for the Netbox channels and add the ones you want.
In the latest chapter of this story, Jeremy announced that a lot of users had joined the new channels and vowed to continue supporting the project (sorry, I can't find where I read that), although he had previously said he could not keep spending as much time on it as before.
Decisions, decisions..
However, even though I didn't want to take sides, I still had to decide what to do and which path to follow, together with my workmate, who has contributed a lot to whatever this series of posts is trying to describe: how we onboarded the network onto Netbox. We were not the only ones, of course; I know of a lot of teams that were put in a difficult position by this, because they had active projects using Netbox as a building block and were forced to choose a path for the future without the luxury of time. You see, NTC made it clear that you can migrate your data to Nautobot only from Netbox versions 2.10.3 – 2.10.4 (they eventually added a few more versions to that support list). What NTC meant, of course, is that they have created a plugin for Nautobot that handles the migration process, provided you follow a simple procedure to extract all your data from Netbox into a JSON file:
python netbox/manage.py dumpdata \
--traceback --format=json \
--exclude admin.logentry --exclude sessions.session \
--exclude extras.ObjectChange --exclude extras.Script --exclude extras.Report \
> /tmp/netbox_data.json
Hold on.. what did you mean you didn’t find the time?
Well.. just as it had happened before, when the Netbox-docker project changed its architecture right before I was planning to write part 1 and forced me to spend time testing a new setup, once again I was forced to tackle the challenge ahead and figure out what to do: stick with Netbox, which had already moved on to newer versions, or migrate to Nautobot and keep evolving with that project?
That was no easy decision. The first thing I decided was to stop upgrading Netbox and stay with version 2.10.4 for the moment, to keep my options open a little while longer. The second was to set up Nautobot in order to test it.
So I joined the new Nautobot channel in the NTC Slack and started poking around. Soon after, the devs and other users told me I could use a development edition of a docker-compose setup that deploys Nautobot on Docker. I decided to give it a try, attempt a data migration, and play a bit with both Nautobot and the SDK.
If you want to try it out before you read the rest of the story, there is an all-in-one version called nautobot-lab. This is the github repo for it: https://github.com/nautobot/nautobot-lab .
So did you migrate already?
Well, not quite.. I am still using Netbox, all four installations that I have already set up (one Prod, one DR, one test and one non-Docker, all in sync using pg_dump exports and imports). But at the same time, I wanted to check whether everything works the same or similarly in Nautobot and whether those extra features can give me what I want: a faster and easier path to the automation I want to build, with Netbox/Nautobot at the center. I will elaborate on that later.
Before deciding to try the plugin, I did some exports and imports of data to CSV files. That worked fine until I hit a block with platforms, which are exported to YAML and imported one at a time. Yes, I could import 32 platforms one by one, but as a process that seemed like a bad idea. Also, interfaces with IP addresses were a little complex to handle (the IP address has to exist first, then an interface is added to a device, and then the IP address is assigned to the interface). All of this screams for an SDK, to get the logic into the client's hands instead of relying on the server to handle it (a sketch of that flow follows below). I wasn't ready to deal with the SDK at that point (a few words on that later, as I already said).
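To make that flow concrete, here is roughly what it looks like with the Pynetbox SDK that I will talk about later in this post. This is just a minimal sketch: the server URL, token, device name, interface name and address are all made up, and the assignment fields assume the 2.10 API.

import pynetbox

# Hypothetical server and API token
nb = pynetbox.api(url="https://yournetboxserver/", token="0123456789abcdef0123456789abcdef01234567")

# The device is assumed to exist already
device = nb.dcim.devices.get(name="router1")

# 1. Create the interface on the device
intf = nb.dcim.interfaces.create(device=device.id, name="GigabitEthernet0/1", type="1000base-t")

# 2. Create the IP address and assign it to that interface (2.10-style assignment fields)
ip = nb.ipam.ip_addresses.create(
    address="10.0.0.1/24",
    assigned_object_type="dcim.interface",
    assigned_object_id=intf.id,
)

# 3. Optionally make it the device's primary IPv4 address
device.primary_ip4 = ip.id
device.save()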
So I exported the data as the plugin docs instructed, installed the plugin in Nautobot (I have to say I was expecting menus, but it's CLI-driven) and tried it, but it crashed. A lot of data had moved in, but apparently there were two issues:
- The Netbox importer plugin doesn't handle duplicates the way you might expect, in case you have declared duplicates in your Netbox installation for any reason (e.g. duplicate IP prefixes). This is currently being worked on.
- Sooner or later during the migration process, dependencies on those duplicate pieces of information cause crashes. I talked to the main developer of the plugin on the Nautobot channel in the NTC Slack server, and he promised to look into it.
Nautobot, although already deployed in quite a few custom cases with NTC customers and despite the time the NTC devs have put into it, is still not as finished and stable a product as Netbox, at least compared to Netbox 2.10.4. It could also be that the Docker version launches a development server, like the one you launch for the first time during a Netbox installation (you will recognize the command in the docker-compose file; it will remind you of the 'Test the Application' section in the Netbox installation chapter). I realize the full power of NTC is behind it, and I have to admit that makes me feel more confident following the Nautobot path, even though Netbox seems more performant and stable right now. I am sure the scales will tip with time.
There is one more thing that pushed the case for Nautobot for me: the jobs & reports and Git data sources features. Having deployed some code already and looking to deploy more, Nautobot feels more natural for that. However, this series of articles will stick with Netbox and mention Nautobot where possible.
Is that all about the plugin?
I have run both the initial version of the plugin (where I ran into problems with duplicate IP prefixes and unnamed devices) and version 1.0.1, where I was finally able to migrate almost all the data (except the duplicate prefixes) without issues. NTC has mentioned that they also support migration from Netbox 2.10.5, probably 2.10.6, and the upcoming version 2.11 (when it comes out). I have only tested from version 2.10.4.
OK, enough said. Netbox install, please.
Let's examine what it takes to install Netbox. I posted some links and videos about the installation in part 1, but let's get more specific.
Install on Linux
The documentation for Netbox is pretty straightforward on this. You have to install all the necessary components first.
Edit: There was a video for installing version 2.10, published when Jeremy moved on to his Netverity venture. Jeremy has since joined NS1, who also support his work on Netbox. So here are two videos for the Netbox install.
One is for version 2.8, made at NTC:
The second is for version 3.3, which you may find more useful. The install described in this post is for 2.10. We migrated to Nautobot from version 2.10.4, so I can't provide current information on Netbox.
There is no point in copying over the documentation, let me just comment on it.
The docs mention that the installation procedure has been tested on Ubuntu 20.04. The last time I installed Netbox on Linux was on Ubuntu 18.04 (I might have tried 16.04 the first time around). I don't think there are any surprises there. Just be careful with the database password. PostgreSQL is a great DB, and it's a nice opportunity for you to pick up some skills.
Redis was not required when I first installed Netbox, but it was one of the surprises I had to deal with as the project evolved. It became mandatory as of version 2.9, and I believe its main purpose is caching (keep that in mind when doing backup/restore: you have to give it time or restart Netbox to see what's actually in there). All you need to be careful about there is some text you need to generate for your config, which is covered very well in the documentation.
After those, you install the system packages, and then you are presented with two options for installing Netbox:
- Option A: Download a Release Archive
- Option B: Clone the Git Repository
Be careful what you choose, as it affects how you do your upgrades later. I went with option B and never looked back.
The rest of the procedure is about stacking the remaining bricks on top of each other and configuring everything. It's pretty exciting; just be very careful with the secret key. If you follow the instructions, everything should go well and quickly (the last time I did a native install it took about 30 minutes tops, and I was being careful). In the final step of that section, you launch a dev server (they call it 'insecure' in an option, probably for good reason, as there is no HTTPS there yet) to access the GUI and make sure everything is OK. (Nautobot-docker for now does the same thing; it's not very performant but it works. I may end up putting an Nginx server in front of it as an HTTPS-based reverse proxy, just like what you will read about later.)
The setup procedure then moves on to Gunicorn, a Python WSGI HTTP server for UNIX, and the creation of the Netbox service. Then comes the web server installation and configuration. You are again presented with two options: Apache or Nginx.
I think I considered Apache when I started out, but I soon discovered that most installations and guides involved Nginx, and since then I have had the pleasure of using and discovering that great piece of software. So Nginx it is for me! There is a short passage in there referencing how to obtain an SSL certificate with Let's Encrypt and use it to set up HTTPS.
I believe you should use HTTPS. In fact, I will post the Nginx setup from my non-Docker installation (modified for confidentiality reasons) so you get a better example of how it can be set up. However, I did not use Let's Encrypt, as it demands specific circumstances that my work environment cannot abide by, so I used a certificate signed by an internal CA. You can do the same with self-signed certificates (check the link for a nice how-to guide by Digital Ocean), provided you understand a thing or two about how those work. I won't go into detail here; you should spend some time reading about that on your own. Let me just say that you will end up using two files, the private key and the certificate, which need to be declared in your HTTP server setup and be available to it on the file system, so that it can encrypt the communication towards the HTTP client. One thing to remember is that self-signed certificates are not trusted by browsers: you will either need to add an exception when you get a warning that the authenticity of the server could not be validated (of course it can't; the CA is the server itself or the system you used to create the certificates, and the chain of trust is not supposed to work like that), or it won't work at all, depending on what your brand of browser allows.
The documentation guides you to copy a sample config from the Netbox base dir to the sites-available directory in the Nginx configuration hierarchy, and then create a symlink to the file in the sites-enabled directory (that's how you create 'sites' in Nginx; there is usually a similar procedure for Apache):
sudo cp /opt/netbox/contrib/nginx.conf /etc/nginx/sites-available/netbox
If you follow their instructions but change the listen parameter to '443 ssl' and enter the SSL certificate details, you end up with something like this:
server {
    listen 443 ssl;

    # CHANGE THIS TO YOUR SERVER'S NAME
    server_name hostnameforyourserver.yourdomain.com;

    ssl_certificate /etc/ssl/certs/thecrtfilename.crt;
    ssl_certificate_key /etc/ssl/private/theprivatekeywithnopassphrase.key.pem;

    client_max_body_size 25m;

    location /static/ {
        alias /opt/netbox/netbox/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    # Redirect HTTP traffic to HTTPS
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}
A few comments on those: these are two server blocks. There is more to your Nginx settings than this, but the default config file contains (usually towards the end) a directive that includes config snippets from files in the sites-enabled and conf.d directories, so the one above gets included in the running Nginx config. The first block defines the settings for serving content from the Netbox web service. You can see that there is a location / block. This is where the server is instructed to reverse proxy the content served at http://127.0.0.1:8001 (the localhost address is relative to the host running Nginx, of course; it's not your own computer). You may recognize that it's at this address and port that the 'insecure' server was launched when you were testing your fresh Netbox installation. So Nginx hides that insecure connection behind an HTTPS server, using the default port 443 (which is why you don't need to specify a port when you browse to https://yournetboxserver/ to get to Netbox). The two lines containing the certificate declarations point to files on the server disk, so you should have put the cert files there and made sure they are readable by the user running the web server.
The second server block is a redirection block (it says so right there in the comment). It's typical not only to serve HTTPS content but also to redirect users to the HTTPS server in case they forget themselves and accidentally type http, or use just the server name in the address (which implies the same thing). In such a case the browser gets back a 301 code along with the new address, redirecting the user to it.
This is enough to get your server running on HTTPS. The last part of the installation process is optional and has to do with using LDAP to authenticate users, either against an LDAP server or MS Active Directory. We will talk about LDAP later; let me first tell you a few things about the Netbox SDK.
What are you rambling on about with this SDK? What's up with that?
An SDK (Software Development Kit) is a collection of libraries, or wrapper classes for other libraries, that hides the complexity of programmatic tasks you do often enough that you want to apply the DRY principle and not rewrite or copy-paste the same code again and again. It's often used with platforms such as Netbox and Nautobot so you can use their APIs more efficiently. You could stick with the REST API for Netbox, of course. It works very well, and there is even online help for it, included in the server in the form of a Swagger interface (Swagger is actually a lot of help). The image below is from Data Knoxx's blog post on doing just that: https://dataknox.dev/2020/11/17/getting-started-automating-netbox-with-the-rest-api/
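Just to give you a taste of the raw REST approach before we move on, here is a minimal sketch using Python's requests library against the standard /api/dcim/devices/ endpoint. The server name and token are placeholders, and verify=False is only there because of internal CA certificates like the ones discussed earlier.

import requests

url = "https://yournetboxserver/api/dcim/devices/"
headers = {
    "Authorization": "Token 0123456789abcdef0123456789abcdef01234567",
    "Accept": "application/json",
}

# Ask the REST API for a device by name (placeholder name)
response = requests.get(url, headers=headers, params={"name": "router1"}, verify=False)
response.raise_for_status()

for device in response.json()["results"]:
    print(device["id"], device["name"])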
But why deal with the REST API directly if you can do complex things in a few lines of code? Usually an SDK for a platform is available as a library for a programming language (Python, Ruby, PowerShell, etc.) or even a framework.
So is there an SDK for Netbox or Nautobot?
Yes, for both of them. Some time ago, some nice people at Digital Ocean created a Python SDK called Pynetbox, which is ingeniously written to be somewhat agnostic about how the API endpoints are structured underneath (on Netbox itself), so it can adapt easily to changes. Jeremy was a Digital Ocean engineer himself, and Netbox was conceived during that time, so it's natural that such interactions exist. Some of those engineers are now with NTC. As Netbox was forked, pynetbox was forked too; the result is Pynautobot, a Python SDK for Nautobot.
Pynetbox is a brilliant tool. When I first learned about it, by doing random searches for Netbox-related tools, I immediately looked for documentation (also available to download). It's important to note that developer documentation reads more like a specification to other engineers (like network engineers, such as myself) than actual docs or tutorials. It's not meant to explain things to novice users. It's meant to describe the available methods, the data types for parameters, return values and object types for class members, and so on: everything a programmer would need, but not enough for a network engineer trying to practice network programmability. As I was trying to understand how to use the tool, I looked for examples and guides and quickly got frustrated. I asked about that in the pynetbox channel on the NTC Slack server, but the channel was pretty much dead, so I asked again on the Netbox channel. Some pynetbox devs quickly responded that the docs were fine and didn't seem to understand my frustration or why there was a problem, which was in turn even more frustrating. A few quick examples of the problem: What are the available "endpoints"? How can I apply the general methods provided by the tool to specific data categories in Netbox? How do I launch pynetbox and contact the Netbox API?
Luckily, one of the other channel members asked me if I had read Przemek Rogala's blog posts about Netbox. Przemek is an amazing engineer and coder; he blogs regularly at https://j2live.ttl255.com/ and has recently joined the NTC team of engineers. Przemek has written a series of four posts about Pynetbox, which can be considered the bible for using the SDK:
- https://ttl255.com/pynetbox-netbox-python-api-client-p1-getting-info/
- https://ttl255.com/pynetbox-netbox-python-api-client-part-2-creating-objects/
- https://ttl255.com/pynetbox-netbox-python-api-client-part-3-updates-and-deletions/
- https://ttl255.com/pynetbox-netbox-python-api-client-part-4-tags-auto-assing-prefixes/
These posts were written at the end of 2018, which means they are a little behind where Netbox/pynetbox versions are concerned. There were some points where the examples didn't work exactly as written and had to be tweaked, but they are still gold. I got in contact with Przemek on Twitter and LinkedIn, and he is as nice a guy as he is a genius Python coder; in fact, he is a brilliant network automation engineer. The posts above are the only ones I know of that effectively pass on the knowledge of how to leverage Pynetbox's power to enter and manipulate data in Netbox.
As some time had passed since Przemek wrote his posts, and more passed while I was developing code with my workmate K.D., it was inevitable that some of the calls didn't work as expected: a particular field had changed name or behavior, and so on. We adapted quite easily by trial and error. Before I get back into storytelling mode, here is what you can do to quickly get a feel for what the power of the Netbox SDK is all about. Create a small Python script like this:
import pynetbox
import ipdb
import requests
import json

# Suppress the warnings caused by an internal CA or self-signed certificate
requests.packages.urllib3.disable_warnings()
session = requests.Session()
session.verify = False

# Connect to your Netbox server (replace the URL and the API token with your own)
nb = pynetbox.api(url="https://yournetboxserver:443/", token="0123456789abcdef0123456789abcdef01234567")
nb.http_session = session

# Drop into the interactive debugger so you can explore the API by hand
ipdb.set_trace()
Yes, of course you need to install pynetbox and ipdb first, using pip install pynetbox ipdb, and yes, you had better use a virtual environment for this and for the rest of your scripts, but as I am not trying to teach you Python or coding, I will leave that matter in your capable hands, OK?
So if you run this, the code breaks right after the assignment of the nb object, which is an instance of the pynetbox API class, already connected to your Netbox server (mind the issues with certificate validation and warnings; they tend to come back whenever you use APIs from code). And it just sits there.. so what can you do with it? Everything! You can set values, retrieve values, filter..
What values? I don’t understand!
Yeah, neither did I when I first looked at the documentation for Pynetbox (I already said that, didn't I?). You see, there are a lot of functions there that mention 'endpoint'. So what's an endpoint? Someone had to tell me that before my mind started working on its own. Guys, if you are the ones working on the thing, it's obvious to you, but not to the rest of the world. There is no use in treating the rest of the world as imbeciles; they can't be in your head when you think. Examples, guys, examples!!! Thank God for Przemek, actually!
I want to be fair to the guys writing the docs (I do respect them and their work, don't get me wrong), so I will give you the exact treatment I got. They said, "But but.. it's on the first page of the documentation!" Now go look there and tell me if you figure it out. If you did, good for you. If you didn't, let me tell you why:
THERE ARE NO EXAMPLES!
An endpoint can be almost any Netbox menu heading. Well, not the heading itself, but what the heading represents. It's a title for a group of information which corresponds to a function covered by Netbox. For example, IPAM, DCIM, etc. are all endpoints (yeah, they are right, the list is right there on the first page of the documentation.. do you feel dumb yet? I did; I got angry, then disappointed, and then I got over it. The point is, it's there). So if you want to see all devices, you would use this assignment (all() is one of the functions supported by pynetbox for each endpoint; you can look it up in the documentation):
devices = nb.dcim.devices.all()
And if you wanted to find a device by name (let's say the hostname is router1), you would type:
nb.dcim.devices.get(name = 'router1')
How do you display the results? You don't need to do anything else: if you type the text above, the result is displayed on the screen, because you are inside the Interactive Python Debugger (ipdb), which can greatly help you figure out how any complex automation tool interacts with your code. Nickolas Russo pointed that out when I was trying to figure out Nornir, and I have been grateful ever since. I am sorry I can't show you the results, but as they would be real network devices, I can't post them in public (you know, the usual cloak 'n' dagger stuff: "it's classified, I could tell you but then …").
You can probably make it a little better by using the .json() function and the usual json.dumps() with indent to get a nicer view:
all_devices=nb.dcim.devices.all()
deviceslist = all_devices.json()
print(json.dumps(deviceslist, indent=4, separators=(',', ': ')))
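Beyond all() and get(), you will mostly be filtering and updating records. Still inside the same ipdb session, here is a quick sketch of both; the site and role slugs and the serial number below are made up:

# Filtering: devices at a given site with a given role (slugs are hypothetical)
core_switches = nb.dcim.devices.filter(site="main-dc", role="core-switch")
for dev in core_switches:
    print(dev.name, dev.device_type.model)

# Updating: change a field on a single record and write it back
device = nb.dcim.devices.get(name="router1")
device.serial = "ABC12345"
device.save()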
I think that's all you need to know before taking a good look at Przemek's series of posts on how to use the SDK and take hold of your Netbox installation using Python. I will get back to my storytelling now, but in case you went ahead and dove into Przemek's content, let me tell you that he has just finished a great series of posts about how to create Netbox plugins:
- Developing NetBox Plugin – Part 1 – Setup and initial build
- Developing NetBox Plugin – Part 2 – Adding web UI pages
- Developing NetBox Plugin – Part 3 – Adding search panel
- Developing NetBox Plugin – Part 4 – Small improvements
- Developing NetBox Plugin – Part 5 – Permissions and API
So why did you switch to a Docker-based installation?
When you read guides step by step and apply everything just as written, most of the time things go well, and then your service is up. The end. Sounds great!
But it's fiction. In fact, most of the time things don't go well. You can start with a clean server and hope for the best. Or you can use a virtual machine and take a snapshot at every point you think you might want to safely return to. If something happens, you instantly revert to that snapshot. In theory this is perfect.
But again, this is fiction. The real world doesn't work that way. Virtual machine snapshots are not backups and do not scale well. They are based on file deltas, where the differences between a fixed disk file and the current state are stored. How much of that can your storage take in a real DC? I can tell you: not much. In a world where storage systems are very expensive and come with deduplication technology so you can save space (which here means money), using snapshots as backups will not get you very far; in fact, it will only put you at odds with your systems administrator.
So what is the answer? The answer is Infrastructure as Code. You only store the specifications, and that's just text. You can rebuild your system exactly the way you want it in seconds or minutes. You can even extend that to a production system (sort of; it depends on scale again), as long as you have defined your processes for backing up and restoring your data.
Whatever you choose as your infrastructure, make sure you have that covered. There is a special section called 'Replicating Netbox' in the documentation; it's a good basis for backup and restore procedures. I will throw in a little something myself later..
I don’t know.. I am not sure I like Docker..
There are alternatives, of course. You could create VM templates, use them to create a clean server VM, use a script to install Ansible, and then use an Ansible playbook to install all your software, just as you would by hand. It's doable, but it's also cumbersome and nowhere near as fast and optimized as using Docker containers and docker-compose. You could also pester that sysadmin for VM backup & restore. Better not, though; he is probably still sore about all those snapshots..
Ok I am sold.
Regarding Docker, I tried doing it the other way first. The longer you mess with a system, the easier it gets for decay to creep in and complicate things. Maybe one upgrade messes things up. Maybe you edit the wrong file. Maybe a package update breaks your system. There are so many things that can go wrong. At some point, although I kept a native installation around for test purposes, I switched to using netbox-docker, which is a community project maintained by an independent team of developers.
Hey, did you forget about LDAP?
No, no, I'm getting to that (yeah, I totally forgot); just a few words first, though. The difficulty of integrating LDAP-based login into a tool varies greatly. The basic idea is that there is an LDAP client integrated into the tool, which uses credentials to query your LDAP server (or your MS Active Directory domain controller, which talks LDAP). Your LDAP server listens on a particular port using either ldap or ldaps, with the 's' meaning secure, so the communication is encrypted. If encryption is used, there are things to handle: most of the time certificates are issued by a company-internal CA, not a well-known one, in which case you will probably need to ignore validation checks.
The next thing is what you need to query for. You have two things to take care of. First is authentication: the user needs to be a valid user and authenticate properly with their password. Second, you may want to limit access to specific resources to specific user groups. In other words, you have to define where your users live in your LDAP hierarchy and where the groups are. If you know how to do that, then only small things remain in order to make LDAP work for Netbox.
Ok, tell me more about that please.
The LDAP guide (marked as "optional") in the Netbox documentation starts with the necessary steps for installing the packages you need (LDAP libraries and Django modules), goes on to activating LDAP support as the remote authentication backend, and then moves on to configuration.
If you read it, you will see several fields being defined, such as AUTH_LDAP_SERVER_URI, which is your LDAP server address and port, AUTH_LDAP_BIND_DN, which is essentially the LDAP username you will use for your queries (this needs to be a valid user with query privileges), AUTH_LDAP_BIND_PASSWORD, which is that user's password, and more. The guide is pretty thorough, but not quite enough. The problem most of the time is visibility into your own LDAP architecture. You need to know where the base OU for your groups is, where your users are stored, and so on. Either you get hold of an LDAP admin (or an MS-AD admin) to help you with it, or you can use an LDAP browser like Softerra LDAP Browser, or even Linux command-line tools such as ldapsearch (easily installed on your distribution). Well, actually, you still need that admin to give you access and a few basic pieces of information, but after that you can go ahead and browse your LDAP hierarchy on your own.
The following is an example of what the LDAP settings look like when you need to connect to an MS Active Directory LDAP instance:
import ldap
# Server URI
#AUTH_LDAP_SERVER_URI = "ldap://yourmsadcserver:3268" - that's insecure.
AUTH_LDAP_SERVER_URI = "ldaps://yourmsadcserver:636"
# The following may be needed if you are binding to Active Directory.
AUTH_LDAP_CONNECTION_OPTIONS = {
ldap.OPT_REFERRALS: 0
}
# Set the DN and password for the NetBox service account.
AUTH_LDAP_BIND_DN = "cn=ldap_msadc_user,ou=inner_special_user_ou,ou=outer_user_OU,dc=example,dc=com"
AUTH_LDAP_BIND_PASSWORD = "ldap_password"
# Include this setting if you want to ignore certificate errors. This might be needed to accept a self-signed cert.
# Note that this is a NetBox-specific setting which sets:
# ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
LDAP_IGNORE_CERT_ERRORS = True
AUTH_LDAP_USER_DN_TEMPLATE = None
from django_auth_ldap.config import LDAPSearch
# This search matches users with the sAMAccountName equal to the provided username. This is required if the user's
# username is not in their DN (Active Directory).
AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=inner_regular_user_OU,ou=outer_user_OU,dc=example,dc=com",
ldap.SCOPE_SUBTREE,
"(sAMAccountName=%(user)s)")
# If a user's DN is producible from their username, we don't need to search.
#AUTH_LDAP_USER_DN_TEMPLATE = "uid=%(user)s,ou=users,dc=example,dc=com"
# You can map user attributes to Django attributes as so.
AUTH_LDAP_USER_ATTR_MAP = {
"first_name": "givenName",
"last_name": "sn",
"email": "mail"
}
from django_auth_ldap.config import LDAPSearch, NestedGroupOfNamesType
# This search ought to return all groups to which the user belongs. django_auth_ldap uses this to determine group
# hierarchy.
AUTH_LDAP_GROUP_SEARCH = LDAPSearch("dc=example,dc=com", ldap.SCOPE_SUBTREE,
"(objectClass=group)")
AUTH_LDAP_GROUP_TYPE = NestedGroupOfNamesType()
# Define a group required to login.
AUTH_LDAP_REQUIRE_GROUP = "cn=netboxusers,ou=groups,ou=outer_users_OU,dc=example,dc=com"
# Mirror LDAP group assignments.
AUTH_LDAP_MIRROR_GROUPS = False
# Define special user types using groups. Exercise great caution when assigning superuser status.
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
"is_active": "cn=netboxusers,ou=groups,ou=outer_users_OU,dc=example,dc=com",
"is_staff": "cn=netboxusers,ou=groups,ou=outer_users_OU,dc=example,dc=com",
"is_superuser": "cn=netboxadmins,ou=groups,ou=outer_users_OU,dc=example,dc=com"
}
# For more granular permissions, we can map LDAP groups to Django groups.
AUTH_LDAP_FIND_GROUP_PERMS = True
# Cache groups for one hour to reduce LDAP traffic
AUTH_LDAP_CACHE_TIMEOUT = 3600
I changed everything corresponding to internal information so it would be safe to post. The fields included in ldap_config.py are django-auth-ldap fields, and providing LDAP support for Docker is similar in that these fields still need to be defined, just not in the same way. That's coming later (much later).
Install on Docker
OK, let's start over here. Instead of me explaining what Docker is, how it works and how to install it, I suggest you read these two excellent tutorials from Digital Ocean:
If you really want to know more about Docker and you know a little Greek (or a lot), you can look up our post on Docker on the website of our community, Automation for Greek Network Engineers.
I will just go right ahead and explain how to set up Netbox on Docker, using the netbox-docker community project.
There are instructions on the main page of the project's GitHub repo, but if you want to be prepared, make sure you read their wiki! One thing I can tell you is that if you think you can start with the regular container image and build from that, you are only partly correct. You can change the settings for the web server (or you could until version 2.10.3, I will explain later), but you can't add LDAP on top. The libraries are simply not there.
I know what you are thinking: 'why don't you just install those?' But it doesn't work exactly like that with containers. They are usually stripped down to what's absolutely necessary and locked down against additional installations for security reasons. I won't go into that now, as we've had enough distractions in this post, but what you need to do is choose a different image from the regular one, using the ldap tag. Check this page to find out what the latest ldap-based version is (right now it's the one based on Netbox version 2.10.6) and then read the LDAP part of their wiki. There is plenty of other useful information on that wiki, like the Monitoring section. Also, keep in mind that there is a different way to get Python packages installed when your containers are created, and it's covered in the documentation (well, don't expect me to tell you everything!).
What I didn't like in the wiki was the section on TLS. What they are saying is: use an external proxy. I did something different, but one step at a time.
You can start by cloning the repo. After that you have to make a few changes. Let's start with the docker-compose.yml file:
version: '3.4'
services:
  netbox: &netbox
    image: netboxcommunity/netbox:${VERSION-latest-ldap}
    depends_on:
      - postgres
      - redis
      - redis-cache
      - netbox-worker
    env_file: env/netbox.env
    user: '101'
    volumes:
      - ./startup_scripts:/opt/netbox/startup_scripts:z,ro
      - ./initializers:/opt/netbox/initializers:z,ro
      - ./configuration:/etc/netbox/config:z,ro
      - ./reports:/etc/netbox/reports:z,ro
      - ./scripts:/etc/netbox/scripts:z,ro
      - netbox-media-files:/opt/netbox/netbox/media:z
      - ./docker/APIfiles:/etc/unit/APIfiles:ro
    ports:
      - "443:8080"
  netbox-worker:
    <<: *netbox
    depends_on:
      - redis
    entrypoint:
      - /opt/netbox/venv/bin/python
      - /opt/netbox/netbox/manage.py
    command:
      - rqworker
    ports: []

  # postgres
  postgres:
    image: postgres:12-alpine
    env_file: env/postgres.env
    volumes:
      - netbox-postgres-data:/var/lib/postgresql/data

  # redis
  redis:
    image: redis:6-alpine
    command:
      - sh
      - -c # this is to evaluate the $REDIS_PASSWORD from the env
      - redis-server --appendonly yes --requirepass $$REDIS_PASSWORD ## $$ because of docker-compose
    env_file: env/redis.env
    volumes:
      - netbox-redis-data:/data
  redis-cache:
    image: redis:6-alpine
    command:
      - sh
      - -c # this is to evaluate the $REDIS_PASSWORD from the env
      - redis-server --requirepass $$REDIS_PASSWORD ## $$ because of docker-compose
    env_file: env/redis-cache.env

volumes:
  netbox-media-files:
    driver: local
  netbox-postgres-data:
    driver: local
  netbox-redis-data:
    driver: local
There are three differences from the original. First, I added '-ldap' to the image tag. You can pretty much guess which version this will end up pulling if you take a look at the list of image tags I showed you before; right now, latest-ldap points to 2.10.6. The second difference is the 'outside' port for the netbox service. I use 443, as I want to set up HTTPS. You can leave it at 8080 if you want to test the installation without HTTPS first; just make sure you understand what you need to do to change it later. I won't go into that to save us some time, but whatever you figure out about starting, stopping, deleting and rebuilding containers and their volumes is an asset that will be valuable to you as you keep using this technology to set up tools with Infrastructure as Code. The third difference is the addition, on my part, of a bind volume with the files I need to switch to HTTPS (the 'APIfiles' directory).
You see, netbox-docker used to rely on Nginx. At first it wasn't obvious to me what I had to do to make the switch to HTTPS, because I hadn't grasped some basic things about Docker and docker-compose, or how projects like this are structured. There are configuration files in that GitHub repo; you will find some for various components. However, if you just use that docker-compose.yml as-is to bring up your containers, those files will not be used unless they are mounted as bind volumes, because the image is downloaded from Docker Hub. So what is their purpose there? Two things. First, you get to see the structure and contents. Second, if you decide to build your own images with the Dockerfile included in the repo, modified or not, you will probably end up using them. Do you get the picture? It's like source code!
Since I was doing upgrades regularly and wanted to make sure I was using the correct configuration files, I would let them get created as intended first and then decide how to modify them. For Nginx, there were two (and a half) options to move ahead:
- You can create an external configuration file in the image of the one inside the container (docker cp, etc.), edit it as you see fit, and then mount it in the necessary location so it will be picked up by the container, or
- You can copy it out, edit it, and copy it back in. Of course, if you destroy the container and rebuild from the image, that file is gone and you need to recreate it (which is where that mounting might seem useful, until you forget yourself and leave it in there during an upgrade and mess things up).
So what's the half option? If there is an editor like vim available inside the container, you can probably edit the file in place, either by entering the container with an interactive shell or by just executing the command with the proper arguments (docker exec -it container /bin/bash is an example of the first option; again, I'm not going into that).
So when the project was still using Nginx, I would go with the lazy option of copying out the Nginx conf and then copying it back in once I was done. But since 2.10.4, that is no longer an option: Nginx has disappeared from the list of services and containers, and Nginx Unit is used as the web server inside the main Netbox container. Surprise!
I discussed my confusion about what had happened with several people on the Slack channel (it has now moved to a non-NTC Slack server, so make sure you look in the right place if you want to find it). After some unpleasant interactions with a wayward French janitor, I had the pleasure of exchanging views with Cinmine, the lead developer of the project. Here is a summary of what they had in mind:
They want to provide a code base that is no more complicated to maintain than it needs to be, so they can concentrate on the main component. If you, as a user of the project, want to integrate it further with other things, like a secure HTTP server, you are free to do so: build your own image, have a blast. They suggest putting hitch in front of it (sorry, but I won't link to it; I am strange and wayward myself, I can afford to be; if you want it, search for it in their wiki, TLS section), but I prefer Nginx. If you are going to do it, do it right! Am I right? (Don't answer.)
Fronting Nginx Unit with Nginx
Fronting Nginx Unit with classic Nginx is not that difficult if you understand how the basic parts work, which is why Nginx has devoted a section to it in their Nginx Unit documentation, here. What's missing is a service block for it inside the docker-compose.yml file, which is not that hard to add. The other thing you need to remember (I will say it to save you some time, but you may miss it the first time) is that inside an architecture 'made' by docker-compose, the names of the services mean something: they can and should be used as a reference instead of the regular server address (that being the Docker host in this case). This is what makes the netbox service available internally to your group of containers and isolates it from the outside world, so you can reverse proxy it with Nginx. Nginx, as a member container of the same virtual bridge/network, will be able to reach the internal Netbox service, while outside users will have to go through Nginx to get to it. Here is the place where that is configured, the location / block:
server {
    listen 443 ssl;
    access_log off;

    ssl_certificate /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/theprivatekeywithnopassphrase.key.pem;

    location /static/ {
        alias /opt/netbox/netbox/static/;
    }

    location / {
        proxy_pass http://netbox:8001;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}
The http://netbox:8001 reference is what I was telling you about before: 'netbox' is the name of the service in your docker-compose file, and Nginx can use it to connect to the Netbox server, which remains hidden from the outside world, provided you don't declare any ports in the netbox service section but only in the nginx section instead. That way you can use a '443:443' ports declaration there to make your Netbox server available via HTTPS.
You probably recognize the rest of the stuff: the port, the ssl directive, and the declarations for the certificates. Of course, you need to make those available to Nginx with bind mounts inside the nginx service. The additional service block for docker-compose could look something like this:
# nginx
nginx:
  command: nginx -c /etc/netbox-nginx/nginx.conf
  image: nginx:1.19-alpine
  depends_on:
    - netbox
  ports:
    - "443:443"
  volumes:
    - netbox-static-files:/opt/netbox/netbox/static:ro
    - netbox-nginx-config:/etc/netbox-nginx/:ro
    - ./docker/thecrtfilename.crt:/etc/ssl/certs/nginx.crt:ro
    - ./docker/theprivatekeywithnopassphrase.pem.key:/etc/ssl/private/nginx.key.pem:ro
The certificate files in this scenario are placed inside the docker directory under the netbox-docker folder. Before Netbox version 2.10.4, the nginx service inside docker-compose.yml was similar to this.
Nginx Unit with TLS
The other option is to let Nginx Unit itself operate with TLS. I prefer that for standalone installations: terminate your tunnels where your web server is (not valid in load-balancing/scaling scenarios like Kubernetes or similar). With that in mind, please understand this: once you launch the whole thing with docker-compose up -d, Netbox will be up, but you will not be able to get anything back via your browser. The problem lies with Nginx Unit. I have not found a way to make it launch in TLS mode and load the certificates from a file that you can put in a bind mount. It is, however, a REST-capable web server, so it's possible to load the certificate via a REST API call. You first need to combine the two files, cert and key, into one, with something like this:
cat nginx.crt nginx.pem.key > ngbundle.pem
Then put that file inside the APIfiles directory. You can then run the command below (I have made a nice little script called load_certificate.sh for it) to load the certificate on your Nginx Unit server. This needs to happen only once; if you delete your containers and run docker-compose up -d again, it needs to be done again. Otherwise the certificate stays loaded, and you will get a corresponding message as a result:
#!/bin/bash
docker exec -it netbox-docker_netbox_1 curl -X PUT --data-binary @/etc/unit/APIfiles/ngbundle.pem --unix-socket /opt/unit/unit.sock http://localhost/certificates/ngbundle
Once that is done, you need to tell the server to switch to HTTPS. That is also done via the API (I made another script for that, called switch_to_https.sh):
#!/bin/bash
docker exec -it netbox-docker_netbox_1 curl -X PUT --data-binary @/etc/unit/APIfiles/ngbundle.json --unix-socket /opt/unit/unit.sock http://localhost/config/listeners
Again, if it all went well, you will get a nice message about the operation being successful. If you restart your container, though, you will probably need to do that again.
This is the content of the ngbundle.json file, used to instruct the Nginx Unit server to use the certificate that was already uploaded:
{
    "*:8080": {
        "pass": "routes",
        "tls": {
            "certificate": "ngbundle"
        }
    }
}
Take a look here for more information: https://www.nginx.com/blog/nginx-unit-1-5-available-now/
So where does this leave us?
We have analyzed how Netbox is installed, talked about setting it up on Docker, and seen how to start playing with the SDK, following a great series of posts written by a great engineer. Before I call it a day for part 2 of this series, let me remind you of the basics you need in order to reap the benefits of Infrastructure as Code with Netbox on Docker.
Make sure you back up your data using the procedure mentioned in the 'Replicating Netbox' section of the Netbox documentation, or use the following commands adapted to netbox-docker:
#!/bin/bash
echo "exporting data from db"
#docker exec -i netbox-docker_postgres_1 pg_dump -h localhost -U netbox netbox > /netbox.sql
docker exec -i netbox-docker_postgres_1 pg_dump -h localhost -U netbox netbox > netbox.sql && docker cp netbox-docker_postgres_1:/netbox.sql /opt/netbox-docker/
echo "copying file to DR Server"
scp /opt/netbox-docker/netbox.sql root@netbox-drs-server:/opt/netbox-docker/
As you can see, I am also copying the backup data to my DR Netbox server, and I have put those commands inside a script called exportsql.sh. If you want to do a restore, or import data on the DR server (which is what I do), here are the commands, also included in a script called importsql.sh:
#!/bin/bash
echo "Stop netbox container"
docker stop netbox-docker_netbox_1
echo "dropping existing db"
docker exec -i netbox-docker_postgres_1 dropdb -U netbox netbox
echo "creating new db"
docker exec -i netbox-docker_postgres_1 createdb -U netbox netbox
echo "importing data"
cat netbox.sql | docker exec -i netbox-docker_postgres_1 psql -U netbox
echo "Granting Access Rights"
docker exec -i netbox-docker_postgres_1 psql -U netbox -c 'GRANT ALL PRIVILEGES ON DATABASE netbox TO netbox;'
echo "Start netbox container"
docker start netbox-docker_netbox_1
echo "Done importing, you can connect now"
You may want to insert a sleep command before that last message, or just wait a while. By the way, those are the default username and database name for the Postgres DB.
The way you use the SDK with the netbox-docker version is exactly the same as before. The only thing you need to take care of is the API token you will be using. There is a default token, which is always the same when you launch the server with docker-compose up -d, unless you have arranged for it to be modified. My suggestion is to change it after you launch the server and not write it down as part of the configuration.
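One way to avoid hard-coding the token (and URL) in your scripts is to read them from environment variables. Here is a small sketch of that idea; the variable names NETBOX_URL and NETBOX_TOKEN are my own convention, not something netbox-docker defines:

import os

import pynetbox
import requests

# NETBOX_URL and NETBOX_TOKEN are hypothetical variable names of my choosing
nb_url = os.environ["NETBOX_URL"]
nb_token = os.environ["NETBOX_TOKEN"]

session = requests.Session()
session.verify = False  # only needed with an internal CA or self-signed certificate

nb = pynetbox.api(url=nb_url, token=nb_token)
nb.http_session = session

# Quick sanity check that the token works
print(len(nb.dcim.devices.all()))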
Is that it?
Almost. I still need to tell you how LDAP works in this case. OK, I didn't forget; it's just that most of you won't care for it. But I think it's very important. It's not so much about security; mostly it's about delegation. You can assign rights to different teams corresponding to LDAP/MS-AD groups. That can mean the world for maintenance and for getting people to accept this great tool in your organization. Also, it's not that hard to set up once you figure it out. Your LDAP/MS-AD architecture doesn't change often; once you know what to set up, you know.
For netbox-docker, you can load a lot of this with environment variables. If you take a look at the ldap_config.py file, you will see that those environment variables are used as values there, and that carries over inside the container. So if you define values for them in a docker-compose file, in the appropriate service section, they get passed into the right container. The place to do that is a docker-compose.override.yml file, which is ignored by git in case you need to pull from the remote master again. I am leaving out the values; you need to put in the ones we talked about before (the same Django variables).
version: '3.4'
services:
  netbox:
    image: netboxcommunity/netbox:${VERSION-latest-ldap}
    environment:
      AUTH_LDAP_SERVER_URI: ""
      AUTH_LDAP_BIND_DN: ""
      AUTH_LDAP_BIND_PASSWORD: ""
      AUTH_LDAP_USER_SEARCH_BASEDN: ""
      AUTH_LDAP_GROUP_SEARCH_BASEDN: ""
      AUTH_LDAP_REQUIRE_GROUP_DN: ""
      AUTH_LDAP_IS_ADMIN_DN: ""
      AUTH_LDAP_IS_SUPERUSER_DN: ""
      LDAP_IGNORE_CERT_ERRORS: "true"
Similarly, in order to define that LDAP is to be used as the remote auth backend, you have to put that in the extra.py file inside the configuration folder:
# Remote authentication support
REMOTE_AUTH_ENABLED = True
REMOTE_AUTH_BACKEND = 'netbox.authentication.LDAPBackend'
Finally, if you want to make it work with Active Directory, you need to modify the ldap_config.py file inside the ldap folder under the configuration folder. Look for the respective line (around line 60) and change it to the following:
AUTH_LDAP_GROUP_TYPE = _import_group_type(environ.get('AUTH_LDAP_GROUP_TYPE', 'NestedGroupOfNamesType'))
What's in store for part 3?
In part 3 we will take a look at the actual code and methods used to enter the data in our particular story. We will also give a couple of examples with Nautobot and talk about how you can maintain that data once it's in there. Since a lot can happen in a few days, if there are further developments I will try to integrate them.
I hope to get back to you soon. If you have questions, look me up on Twitter under the mythryll handle.
Thanks for your patience, take care, stay safe!