These are the voyages… ok, enough with the Star Trek memorabilia. I was just watching Patrick Stewart on Jimmy Kimmel and recently watched a couple of episodes from the TNG series, so that's still on my mind. This post describes a way to get logs from Cisco Network Devices directly into Loki without touching a local filesystem. More in the text that follows.
Quick summary/TLDR:
- I wanted to build a solution to forward, store, filter Cisco Network Device logs in Loki and Grafana and be able to define alerts from those.
- I had issues setting up standard syslog forwarding because of Cisco-specific field formatting.
- I found a solution for storing the logs in JSON format on local storage in a Syslog-ng blog post by Peter Czanik.
- I managed to get Promtail to process the JSON files, replace fields and forward the Cisco logs to Loki, monitor them in Grafana, and use Logrotate to keep file rotation under control.
- I transferred the whole infrastructure to Docker containers
- I went back to how logs are processed and managed to get the fields I wanted pushed in JSON format to Promtail's http push API and then over to Loki and Grafana, so no local storage is used and there are no dependencies on the Docker host.
- I used ChatGPT v.4 to help me understand the structure of the configuration for Syslog-ng and Grafana and get to results faster.
- Peter helped a bit in the end with troubleshooting, his help went a long way.
- There's a small surprise waiting for patient readers at the end of the blog post.
“Always start in the beginning..”
It's been a while since I started experimenting with logging. I don't remember exactly what sparked the interest in me. Maybe one too many times feeling frustrated when troubleshooting, only to get the usual advice in the end: "Did you look at the logs?".
At one point I became aware of the ELK stack (Elasticsearch, Logstash and Kibana) and became a fan of the central log server paradigm, especially something that can help with fast searching, matching and even alerting based on certain events and triggers. It must have been around version 2.3 of ELK, and I wasn't aware of containers back then, so installing it and making it work was not a simple task for me. At first I wanted to deploy it to gather logs for all my open source machines, mainly Linux servers on VMs, but a little later I added RPis to the pack. I still remember the day I screamed "YES!!" when the first log from a remote server made its appearance on Kibana, and the cheers from some of my colleagues (nothing wrong with a little fun while putting new stuff together at work).
Later on I figured out that it was not such an easy platform to manage. Nagios Log Server (NLS) made its appearance, and I got wind of it, as I had been using Nagios Core for quite a few years (more than 12 now). It was based on Elasticsearch and Logstash but had a different UI; I am not sure what that was based on. Installation was not difficult: there was a VMware .ovf file and an .iso file, so installation was relatively quick and standardized compared to starting with a Linux server from scratch and setting up ELK on it. Naturally there were drawbacks, such as dealing with a CentOS system that was not that easy to maintain (I had to follow Nagios Log Server update cycles), plus others.
A change of heading
After a while it became clear to me that the ELK platform was heavy on IT resources. NLS was also not exactly free to use; it was free only up to a certain amount of logs per day. I had convinced my Head of Section to get a license for one of the instances, as it was a much better alternative than a native Windows log server we used that had stopped working after a few updates, so there was an annual cost to it. Finally, it was difficult to export log data, define alerts, etc.
As time passed, I became acquainted with Grafana and the TIG stack, after having observed how my now friend Jason Davis (@snmpguy) used it as a monitoring platform for the Cisco Live NOC. I was always looking for news and information about Grafana, so it didn’t take long to notice Loki.
Grafana promised that Loki would be easier to set up and get acquainted with ("Like Prometheus, but for logs!"), easier on resources and performant enough for our needs. It also fit in extremely well with the rest of the Grafana ecosystem and supported a query language (LogQL) and alerts. Like most other Grafana products, there was an OSS version I could use and set up on Docker. So everything looked good!
I have to admit though, I was not able to get everything working on the first go. There were a lot of details in the architecture and the configuration settings that I didn't quite grasp (some I still don't), nor was there a blog or how-to readily available on how to set up a Loki instance on Docker. I always try to build everything on Infrastructure as Code platforms, so as to be able to move things around easily when platforms are updated or changed, or even take the infrastructure to the cloud if that day comes.
So I asked for help in my group of Greek (or Greek-speaking) Network Automation Engineers and was given a sample docker-compose and some configuration files to begin with, so I could get a system running and see how it goes. It was a v1.x instance, I believe. I took a liking to it almost immediately.
“ok.. what are you selling..”
Nothing. I don’t want to waste your time, Loki is great and this is not a promotional blog post (I don’t do those at all). I will put up more Loki links in the final paragraph for this blog post. I have already set it up to watch over my linux machines using Syslog-ng for the log collection, or Promtail for windows hosts. The Syslog-ng instances forward logs in syslog format to Promtail on the central log server and then Promtail forwards logs to Loki. Logs are then displayed on Grafana with Loki as a source, usually with a log entry list view and a log rate view (graph). Labels are detected in the log stream, and then those labels can be used, if necessary, to create variables in order to adapt log filters chosen by the user in the dashboards. You can use regex in the variable values, e.g. for the host variable, in order to separate the dashboards per group of hosts/network devices.
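For a taste of what those filters look like, here is a minimal LogQL sketch (hypothetical label values, not copied from my dashboards): a query that keeps only logs from hosts matching a regex and containing a keyword, and the same query driven by dashboard variables.

{job="syslog", host=~"web-.*"} |= "error"
{job="syslog", host=~"$host"} |= "$searchterm"

In the second form, $host and $searchterm are Grafana dashboard variables, so users can narrow the stream down without editing the query.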
“Too fast, I think you lost me there..”
I guess that merits another blog post, but I am not sure I should be the one to write it. I will think about it at a later time. For now, here is a way to cover the Linux-server-logs-to-Loki part of the architecture, by none other than syslog-ng evangelist Peter Czanik himself: Sending logs from syslog-ng to Grafana Loki, and Grafana, Loki, syslog-ng: jump-starting a new logging stack. Of course that's not at all a detailed blog list. I hope to complete this list soon.
“Fine, move on”
Fast forward to now. I needed an alternative solution for collecting logs from Cisco Network Devices. So far I had set up a couple of Loki stack instances on Docker:
- One Loki instance for collecting logs from a specific subset of (Windows) hosts that are part of a secure application ecosystem. In this case, the Loki instance replaced the NLS I had set up with a license. It works great; Promtail is the log collector installed on Windows.
- A second Loki instance for collecting logs from firewalls, for use by security administrators. This instance also works fine, but I intend to improve it with per-dashboard permissions, alerts, etc.
The case for Cisco network devices seemed simple. It’s just syslog right? Well, there were problems.
Cisco network device logs use a specific structure that can lead to confusion if you approach them like a Linux host. In the log stream, when using a syslog protocol, it's possible to detect some fields, some more common than others. Maybe the most important is the $HOST field, which relates to the identity of the host producing the logs. Some Cisco devices fill this field with confusing values. Those values are integers, perhaps consistent per device, but there is no easy way to establish a 1-to-1 relationship between those integers and the device IP address (which is normally contained in the $HOST field) in order to look up the host. So it would be ideal if those logs could be manipulated early on to display the right value in the $HOST field. As it happens, there is another field, $HOST_FROM, that does always contain the right value, so that value needed to be used instead.
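For illustration, syslog-ng can overwrite the $HOST macro with the value of $HOST_FROM early in the log path using a rewrite rule. This is only a hedged sketch with hypothetical source and destination names (s_net, d_logs); as you will see below, I ended up using ${HOST_FROM} directly in the output instead.

# hedged sketch: force HOST to the sending address (HOST_FROM) before the logs move on
rewrite r_fix_host {
    set("${HOST_FROM}", value("HOST"));
};

log {
    source(s_net);        # hypothetical network source
    rewrite(r_fix_host);
    destination(d_logs);  # hypothetical destination
};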
I also had another problem. Logs were not produced with the right timestamps. That was not a log processing problem, as it turns out. But I didn’t know that at the time.
So I did two things.
- I searched the web for references for those problems, starting with the time problem (nothing original here, but always a good place to start).
- I asked ChatGPT about it.
The first ended up leading me down a false path, but it did take me further. A Reddit post claimed that Syslog-ng does not recognize the correct time format for Cisco logs and that Rsyslog would handle it correctly instead. That wasn't at all accurate, as I learned after I decided to try it out. However, it is very important to configure your devices with the correct timezone and timestamp settings.
I kept trying things and checking my output, but had little feedback as to why it didn't work. So I kept searching, until I found a blog post from Peter Czanik describing how one can collect Cisco Network Device logs using Syslog-ng, by exporting the logs in JSON format to disk, while using a specific parser to get the logs into a more convenient form. Here is the blog post: Parsing Cisco logs in syslog-ng. That was a great way to get logs into local storage in JSON format, which was nice but not exactly what I wanted.
I resorted to ChatGPT's help for two reasons. First, I needed a way to combat the rising confusion, which threatened to drown me, about what was true or not regarding where my problem lay. Second, I thought that since, with Peter's approach, the logs were formatted in JSON, it could potentially be easier to handle them programmatically or systemically through a log processor such as Promtail.
“So you are an AI fan then? Another one.. “
Not exactly, I would not say that I am a fan. But I am not shutting my eyes or burying my head in the sand either. Things will evolve. We have a choice to acknowledge this or ignore it. I am not going to ignore it.
I was introduced to ChatGPT at the end of Fall 2022, when it was announced. In its free version (3.5) I found that it could be frustrating when trying to solve IT problems that needed knowledge of recent developments in a field, which was to be expected. It also made mistakes, or would crash after 3-4 responses when challenged on its mistakes.
After GPT-4 launched, a fellow engineer mentioned that he had subscribed to the service and was very happy with it, so I decided to give it another try. I have since used it in a couple of other projects as well, and I have to say I agree with my friend. This version handles problems much better, adapts well when asked to modify or add requirements, keeps the context much better and doesn't crash. I found that even when it doesn't hit the target dead center, I learn from its well-structured responses, gaining faster insight into how things work in areas I had no previous contact with. It also provides explanations or examples and will answer clarification questions. Did I mention it also speaks Greek? Not important in this case, but nice.
So it doesn't qualify as an IT magician pulling rabbits out of a hat, but it helps a lot in speeding up projects, provided you have some idea of what you are doing yourself, or are at least learning as you go along. Which is a lot, really! The only negative point I found is that it would not challenge me when I myself was wrong. It apologizes and backs down, which can lead to you taking a wrong turn and getting lost in your project.
Of course I never thought to ask it about political parties, who will win the Super Bowl, or which stocks to buy. I leave those fields to others.
“So how was that any help?”
In this case, I explained the situation and asked it to provide a way to process the JSON-formatted logs with Promtail, so that those logs could be sent over to Loki. My initial approach was to try to get Syslog-ng to produce those logs on the fly in JSON and send them over to Promtail via http. It gave it a shot, but again I could not make it work and had no idea why.
In the Promtail documentation I found a passage stating, "Currently, Promtail can tail logs from two sources: local log files and the systemd journal". That conflicted with ChatGPT's proposal, so I challenged it with the quote. Of course, ChatGPT cannot access the internet and its training data goes back to 2021, as it will easily tell anyone who asks. So I just pasted the text into the chat and it immediately apologized for the confusion (it usually does that when you point out a mistake or a wrong direction) and steered me toward getting Promtail to process the files stored locally in JSON format.
It wasn’t what I wanted, but I tried that direction. It helped me understand the pipeline stages supported by Promtail to process logs before sending them on their way. It offers a way to detect and exchange labels to transform logs in the format you need, so that they can be easily filtered and displayed using Loki and Grafana. If ChatGPT hadn’t broken down the structure so well in its responses, accompanying the config code with detailed explanations and even suggesting more things to explore, it would have taken me a lot longer to understand it well enough to advance to the next step on my own.
So I took advantage of its proposal. We went a few rounds on a wild goose chase, first building an Ubuntu container with syslog-ng and Promtail in the same container. I didn't like that idea; I wanted separate containers. Again, I ran into problems making it work.
So I challenged it again. It suggested installing the services on a virtual machine, to be able to monitor exactly what was not working. I followed that suggestion as well. It felt like working with a partner: we were not agreeing on the plan, but we were taking each other's input into consideration nonetheless and moving along with the next idea or proposal.
After a few days of pause, I found a mistake in the Promtail config. Again, the problem was not that I had chosen a Docker-based installation, but installing the services natively on Linux helped save time in locating what the problem was.
“Finally some results!”
Finally, I succeeded in getting logs from the devices in JSON format, storing them on disk, getting Promtail to read the files, process the logs to transform them into the format I needed, and then send them over to Loki to be displayed in Grafana. I had kept the Loki and Grafana parts of the stack on Docker; Syslog-ng and Promtail were running as native services on Ubuntu 22.04.2 LTS.
I experimented on my own with moving the native services into Docker containers that would get access to the local storage log directory, and was successful. So I had a solution entirely based on Docker for getting the logs into Loki, but I didn't like it 100%. I maintained two separate instances, one with the native services and one with Docker-based Syslog-ng and Promtail. So I explored a couple more things.
- If I was going to store logs in the filesystem before they were processed and shipped to Loki, I had to find out how to rotate them so that the server would not run out of space. The solution was very easy to find and configure: Logrotate. Logrotate works on logs stored in directories. When you install Syslog-ng on an Ubuntu Linux server, there are things already configured for logrotate that cover the dirs and files defined by Syslog-ng; adding the ones you want is easy. I found this post, but if you still don't understand how to do it, you can always ask ChatGPT. I am kidding; you can ask it, but you can also read this and this to get a good enough understanding of how it can be used in a native installation. Basically, you create a list of folders or files and then define a block of settings between {} braces (there is a minimal sketch right after this list).
- I would need to configure log retention inside Loki. I had neglected that in the past and it turned out to be a disaster, as I did run out of space once and had to ditch the volumes to get it back up again (yes, dashboards and all the variables and regex settings had to be ditched as well). Not so difficult on its own, but things do add up and are easily forgotten. You can find out how to do that here. I haven't done it yet; I will probably use the compactor for it. Be careful: the default is to not delete logs, so they will grow and grow until disk space is depleted. Don't neglect this for too long.
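Here is the minimal Logrotate sketch promised above, assuming the log file path used later in this post (/var/log/fromcisco.log) and that syslog-ng runs as a systemd service that can be reloaded to reopen the rotated file; adjust frequency and retention to taste:

/var/log/fromcisco.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        # reload so syslog-ng reopens the rotated file (assumes a systemd host install)
        systemctl reload syslog-ng > /dev/null 2>&1 || true
    endscript
}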
I tried using Logrotate on the native services installation and that went very well. But if I wanted to make it work for the Docker-based install, I had to do more research, as the folder where the logs were kept was a bind volume on the docker compose services, not an internal log folder on a Linux server. I went back to ChatGPT in the same chat session about the logs (it keeps the sessions you created) and explained my progress. I then asked it for advice on the log rotation, and it did suggest a solution. Here it is (the blacklabelops/logrotate docker image; you can find others as well). You can pass the same settings to it using environment variables (I figured that part out on my own using the documentation, which is very well written). That worked as well.
So I finally had a complete solution with log rotation. I have created a GitHub repository with all the files you need to set this up: https://github.com/itheodoridis/cisco-logs-to-loki-local-storage
Regarding the syslog-ng config for this: in the case of a native service you could just add a file in the /etc/syslog-ng/conf.d directory, so it would be loaded in addition to the main config. In our case with Docker, it's one file for everything (except for the scl.conf file, which doesn't contain much), so everything is included. Here are the important parts:
@version: 3.38
@include "scl.conf"

source s_net {
    default-network-drivers(flags(store-raw-message));
};

template t_jsonfile {
    template("$(format-json --scope rfc5424 --scope dot-nv-pairs
        --rekey .* --shift 1 --scope nv-pairs --key ISODATE)\n\n");
};

destination d_fromcisco {
    file("/var/log/fromcisco.log" template(t_jsonfile));
};

log {
    source(s_regular);
    destination(d_other);
};

log {
    source(s_net);
    destination(d_fromcisco);
};
This is the Promtail config for that:
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: syslog
    pipeline_stages:
      - json:
          expressions:
            host: HOST_FROM
            priority: PRIORITY
            cisco_severity: cisco.severity
            message: MESSAGE
            timestamp: ISODATE
      - timestamp:
          format: RFC3339
          source: timestamp
      - labels:
          cisco_severity:
          host:
          priority:
      - output:
          source: message
    static_configs:
      - targets:
          - localhost
        labels:
          __path__: /var/log/fromcisco.log
          job: syslog
I haven’t touched the Loki config but you do need one for Loki to work. It will become relevant when the time comes for you to define log retention. You can find it in the repo.
The Grafana config, while huge as a file, relies heavily on defaults, so most of it is commented out. The essential info is pretty much passed through environment variables in the docker-compose.yml file. However, I do update three things: LDAP, email and alerting. The ldap.toml file includes everything you need to authenticate to AD using LDAPS and even define which AD groups will be assigned specific roles in Grafana.
#################################### Auth LDAP ##########################
[auth.ldap]
enabled = true
config_file = /etc/grafana/ldap.toml
allow_sign_up = true
#################################### SMTP / Emailing ##########################
[smtp]
enabled = true
host = smtp-server.domain.com:25
user = """domainname\username"""
password = """domainuserpassword"""
skip_verify = true
from_address = username@domain.com
from_name = Grafana
#################################### Alerting ############################
[alerting]
enabled = true
execute_alerts = true
Of course you can skip LDAP for authentication and use local users, or skip the smtp config, in which case you will not be able to send email alerts, for example (you can still define other types of contact points like MS Teams using webhooks, and no, I won't explain how in this blog post).
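For reference, a minimal ldap.toml sketch for AD over LDAPS, with AD groups mapped to Grafana roles, could look like the following (hypothetical hostnames, DNs and group names; check the Grafana LDAP documentation for the version you run):

[[servers]]
host = "dc01.domain.com"
port = 636
use_ssl = true
bind_dn = "CN=grafana-svc,OU=Service Accounts,DC=domain,DC=com"
bind_password = "changeme"
search_filter = "(sAMAccountName=%s)"
search_base_dns = ["DC=domain,DC=com"]

[[servers.group_mappings]]
group_dn = "CN=grafana-admins,OU=Groups,DC=domain,DC=com"
org_role = "Admin"

[[servers.group_mappings]]
group_dn = "CN=grafana-viewers,OU=Groups,DC=domain,DC=com"
org_role = "Viewer"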
“Had enough? Too bad, there’s more..”
As I already said, I wasn't happy with this yet. I did put it in production, announced its availability to some of my colleagues, and then went back to look deeper into how I could move logs directly from syslog-ng to Promtail over http, without going through the docker host's local storage, as that was an unpleasant dependency that would hinder moving those services to a different platform in the future. Two things happened that moved me in that direction.
- While troubleshooting the problematic log format, in one of its attempts to offer a better way, ChatGPT actually suggested a Python script to transform the logs in a custom way with Syslog-ng. I had no idea that was possible. Of course I might have known, if only I had followed the Syslog-ng walkthrough that Peter had released on LinkedIn recently, or if I had read some of his presentations that I later discovered. But I didn't have the time then and had no idea about what content was available or about the Python direction. I will go back to it soon. Read/watch it. Peter is amazing.
- In the Grafana Loki Slack channel I had asked whether Promtail was able to receive logs over http. Finally someone named @liguozhong answered, showing me the code reference that proved it's possible (efficient as proof, but not very useful to me), and Tom Donohue, a Grafana engineer, verified it by pointing to the part of the documentation that introduces the Promtail push API, a way to receive logs over http, possibly from another Promtail instance, so they can then be forwarded to Loki in the format that Loki expects. I did have more questions on the matter (which ports, which format, etc.). Another member, @Sai Gowtham Rajanala, added a few questions of his own and offered his experience and answers. He was the one who confirmed the ports and the API path, so I am thankful for that. Here is his LinkedIn profile.
With that info, I went back to ChatGPT more motivated than before. It was obvious that ChatGPT had been right to suggest direct http transport to Loki through Promtail, but it was not that obvious in that context, and the rest of the suggestions were also hard for me to grasp. Perhaps it was a little agnostic of what it had suggested itself. It had taken the config used in Peter Czanik's blog post and constructed an http message in a specific format that should be sent to Promtail or Loki.
Again ChatGPT had suggested something that could propel things forward, but the proposed modified config for Syslog-ng and Promtail was not exactly on target. Here is what it suggested:
source s_net {
    default-network-drivers(flags(store-raw-message));
};

template t_loki {
    template("{" "streams": [ { "stream": { "host": "${HOST}", "source": "syslog" }, "values": [ [ "${UNIXTIME}000000", "$(format-json --scope rfc5424 --scope dot-nv-pairs --rekey .* --shift 1 --scope nv-pairs --key ISODATE)" ] ] } ] "}");
};

destination d_http {
    http(
        url("http://127.0.0.1:3500/loki/api/v1/push")
        method("POST")
        headers("Content-Type: application/json")
        user_agent("syslog-ng User Agent")
        body-suffix("\n")
        body(t_loki)
    );
};

log {
    source(s_net);
    destination(d_http);
};
Hint: it didn't work, and restarting syslog-ng produced a failure that could not be explained by any other data. There were no logs; syslog-ng IS the log service. So to get to the bottom of this, I attempted another approach. I had already reached out to Peter some time ago, having discovered more parts of his work on other cases, such as getting logs over to Elasticsearch using http. I asked on Twitter for people who could understand Syslog-ng config syntax, as I could not understand where the mistake was. @NeilHanlon responded and tagged Peter (@PCzanik). I explained the issue and where I was in my progress, and then showed the syslog-ng config.
Peter was quick to spot the problem: "The http destination does not support a reference to a template". ChatGPT was wrong to suggest that. Peter also suggested using a file destination to export the logs with the suggested format, so that it would be easier to inspect and troubleshoot. I followed both of those leads (ChatGPT had proposed a network capture instead, with sudo tcpdump -i lo -w capture.pcap port 3500; I never got to that, the human here was more practical).
First, I went back to ChatGPT, informed it of its mistake and suggested that perhaps the config needed some adjustment. I kept trying its suggestions but had similar results. Sometimes the Syslog-ng service would not accept the config change; other times the logs were getting stuck at the Promtail stage, as it was rejecting the http messages with http code 400 (Bad Request). After trial and error, surprise surprise, it again suggested using a Python script! But first it did respond to my request for troubleshooting the Promtail push API by providing a curl command to test it.
It was successful, and it was the first time I received a log in Loki over http via the Promtail push API! Here is the curl command:
curl -X POST -H "Content-Type: application/json" -d '{
"streams": [
{
"stream": {
"host": "test-host",
"source": "syslog"
},
"values": [
[ "<unix-timestamp-ns>", "<log-line>" ]
]
}
]
}' http://localhost:3500/loki/api/v1/push
Secondly, I tried Peter's suggestion to export the result of the processed logs to a file. It turned out that my suspicion was correct: the processed output resulted in badly formatted JSON towards the Promtail push API, as there were too many double quotes in series for the output to be considered correctly formatted JSON data. The template command that exported the log data in JSON format produced double-quoted data inside double quotes. All of ChatGPT's suggestions for escaping the special characters to produce clean JSON failed. Only the Python script it suggested worked, and the logs went through. Of course, the part of the message with the now correct JSON structure was ignored by Loki, due to the structure of the expected format.
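To make that quoting problem concrete, here is a tiny Python illustration (not the script ChatGPT produced, just a sketch of the failure mode): embedding an already-JSON-formatted log record inside the outer payload without escaping breaks the JSON, while letting a serializer escape the nested string keeps it valid.

import json

# An inner log record already rendered as JSON, like format-json produces
inner = json.dumps({"HOST_FROM": "10.1.2.3", "MESSAGE": "Line protocol on Interface Gi0/1, changed state to up"})

# Naive embedding: the inner double quotes collide with the outer ones -> invalid JSON
broken = '{ "streams": [ { "stream": { "host": "10.1.2.3" }, "values": [ [ "1680000000000000000", "' + inner + '" ] ] } ] }'
try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print("broken payload:", e)

# Serializing the whole payload escapes the nested quotes, so Promtail/Loki can parse it
payload = json.dumps({
    "streams": [{
        "stream": {"host": "10.1.2.3", "source": "syslog"},
        "values": [["1680000000000000000", inner]],
    }]
})
print(json.loads(payload)["streams"][0]["values"][0][1])  # the escaped inner record, intact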
Again ChatGPT had followed the same pattern: right direction, wrong outcome. As if it was missing the insight into what was behind the next hill.
As the formatted data was ignored by Loki, I decided to drop it entirely and focus on which original fields I could extract from the log, in order to inject them into the hand-made JSON structure in the 'stream' key, as labels. That gave me all I had originally intended to get from the Cisco devices into Loki, and then some, such as the 'facility' field. For other fields, such as the 'mnemonic' field, I wasn't so lucky. Perhaps the locally stored JSON data is of some use for specific use cases. Personally I intend to stick with the http transport through the Promtail push API, using the fields already available in the stream. At least for now.
Final Configs
The final configs are contained in this repo: https://github.com/itheodoridis/cisco-logs-to-loki-http-push-api . The main differences are the following:
- No local storage is used on the docker host for logs, so no such dependency, and therefore no logrotate container. I think that having no host dependency is a great advantage for IaC.
- Syslog-ng constructs a JSON structure in the http destination, using the fields detected in the log stream. ${HOST_FROM} is used instead of ${HOST}. The "host" used as a log destination is essentially the Promtail service defined in the docker-compose.yml. I left the syslog-ng config forwarding the internal logs of the syslog-ng container to a different log server, just like in the previous setup. You can remove that part if you don't want it, up to you. Docker does store logs for each container, and you can get those forwarded through the host one way or another. There is a mention of the structure that Loki expects here.
- Promtail now activates the http push API in the scrape_configs section, by defining a job. That is what is described in this page on the Grafana site.
- There are no differences in the Loki and Grafana configs.
- If you want to run syslog-ng on the docker host itself (Ubuntu Server 22.04.2 LTS in my case), you can install it as a package, and even have it forward logs to an external central log server (another Loki!). Just don't have it launch a log source listening on the network at 514/udp. A small sketch of that host config follows below.
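As a hedged sketch of that last point (hypothetical server name; s_src is the default local source defined in Ubuntu's stock syslog-ng.conf), a drop-in file like this forwards the host's own logs out without opening any network listener:

# /etc/syslog-ng/conf.d/forward-host-logs.conf
destination d_central {
    syslog("central-log.example.com" transport("udp") port(514));
};

log {
    source(s_src);          # the default local source (system() + internal()) on Ubuntu
    destination(d_central);
};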
“Can I have a quick look?”
Again, the important parts are below. Remember to refer to the GitHub repo for the full configs.
Syslog-ng
source s_net {
    default-network-drivers(flags(store-raw-message));
};

destination d_http {
    http(
        url("http://promtail:3500/loki/api/v1/push")
        method("POST")
        headers("Content-Type: application/json")
        user_agent("syslog-ng User Agent")
        body-suffix("\n")
        body('{ "streams": [ { "stream": { "host": "${HOST_FROM}", "source": "syslog", "severity": "${SEVERITY}", "priority":"${PRIORITY}", "facility": "${FACILITY}" }, "values": [ [ "${USEC}", "${MESSAGE}", "${ISODATE}" ] ] } ] }')
    );
};

log {
    source(s_net);
    destination(d_http);
};
Promtail
scrape_configs:
  - job_name: push1
    loki_push_api:
      server:
        http_listen_port: 3500
        grpc_listen_port: 3600
      labels:
        pushserver: push1
        job: syslog
    pipeline_stages:
      - json:
          expressions:
            host: HOST
            priority: PRIORITY
            severity: severity
            message: MESSAGE
            timestamp: ISODATE
      - timestamp:
          format: RFC3339
          source: timestamp
      - labels:
          severity:
          host:
          priority:
      - output:
          source: message
Anything missing?
Plenty. I have not yet deployed log retention for Loki. I will, very soon. I am starting here: https://grafana.com/docs/loki/latest/operations/storage/retention/
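For orientation, retention via the compactor boils down to something like the following in the Loki config (a hedged sketch based on that documentation page; option names shift between Loki versions, so verify against the version you run):

compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  retention_delete_delay: 2h

limits_config:
  retention_period: 744h   # keep roughly 31 days of logs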
I have not deployed alerting for logs. I will try that too very soon; in theory it's very simple, you go through Grafana. I have a webinar recording lined up for that, which I will watch before trying things out. I already have some experience with Grafana alerting. Take a look here.
I have not included the Cisco config commands to declare a logging host. They are not that difficult to find on your own; just take care to define everything that is needed for the logs to reach the logging host (VRF, source interface, etc.) and also timestamps and timezone (I used the local timezone after conferring with my colleagues; it did make a difference, and no, I won't say more on that).
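Still, just as a rough idea, the relevant IOS/IOS-XE commands usually look something like the lines below (hypothetical addresses, VRF and interface names; syntax varies per platform and release, so verify against your own documentation):

! timestamps and timezone (pick what fits your environment)
clock timezone EET 2 0
service timestamps log datetime msec localtime show-timezone year
! where the logs go, and from which interface/VRF
logging source-interface Loopback0
logging host 192.0.2.10 vrf MGMT
logging trap informational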
I also have not included a how-to on setting up Loki in Grafana in this post. Do you need that? There are so many blog posts and videos, even on the Grafana site. Check the next section for a couple of links, or talk to me on Twitter if you need a part 2 with a few quick hints. I have to repeat, I am not a logging expert; I am mostly a tinkerer without the fear of trying something new and making a fool of myself. When I succeed, I share my findings with everyone to help them gain insight and save time. If we all learn more, the better for everyone, yes?
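If all you need is a pointer, the Loki data source can also be added through Grafana's provisioning mechanism; a minimal sketch (a hypothetical file under provisioning/datasources/, with the loki container name from my compose files) would be:

apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true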
Finally, after interacting with Peter Czanik, he agreed (thank you Peter!) to get a discussion going about creating an http destination in syslog-ng targeting the Promtail http push API, to get logs into Loki. The discussion is ongoing here: https://github.com/syslog-ng/syslog-ng/discussions/4454 . I hope it leads to something useful for more people.
Useful Links
- https://www.syslog-ng.com/community/b/blog/posts/sending-logs-from-syslog-ng-to-grafana-loki
- https://www.syslog-ng.com/community/b/blog/posts/parsing-cisco-logs-in-syslog-ng
- https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-101-part-1-introduction
- https://www.syslog-ng.com/technical-documents/list/syslog-ng-open-source-edition/3.37
- https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.37/administration-guide/40#TOPIC-1829058
- https://grafana.com/docs/loki/latest/
- https://grafana.com/go/webinar/getting-started-with-logging-and-grafana-loki/?tech=target&pg=docs-loki&plcmt=related
- https://grafana.com/docs/loki/latest/clients/promtail/
- https://grafana.com/blog/2023/04/13/grafana-alerting-searching-for-grafana-alerts-just-got-faster-easier-and-more-accurate/
- https://grafana.com/blog/2022/06/14/grafana-alerting-explore-our-latest-updates-in-grafana-9/
- https://grafana.com/go/webinar/building-grafana-dashboards-emea/ – Webinar on May 17th 2023.
- https://openai.com/blog/chatgpt
That’s it!
Yes, this is the end. Thanks for reading this far, and sorry again that this was so long; I hope it's useful for you. I haven't found another implementation documented anywhere so far, so maybe this can give you a way to get your logs together for Cisco Network Devices.