#19 FabCon Poland 2026 - my takeaways from the second edition
BLOG · UPDATE OVERVIEW · POWER BI · FABRIC · AI
Sebastian Jagniątkowski
5/3/2026
FabCon (Fabric Conference) is a series of events organized by the Microsoft community and partners, dedicated to the Fabric platform and the broader analytics ecosystem. FabCon Poland is an initiative by OnexGroup, held at Microsoft's Warsaw office. It's a so-called re-delivery of announcements from the global FabCon, and since some time passes between the two events, some of the features discussed have already gone live. As we all know, Microsoft ships updates regularly, so what looked like a "preview announcement" on stage in Atlanta might already be in Preview or even General Availability by the time you're reading this. The event itself focuses primarily on the latest updates and features that are just around the corner.
This is not your typical "Power BI tricks and tips" conference. It's about the direction of the entire platform. Company representatives, analytics managers, and analysts come here to find out what Microsoft is planning and where things are heading — so they can react as quickly as possible. Everyone wants to be as close to the source as possible, and on top of that, it's a great opportunity for networking and sharing real-world implementation experiences.
🔑 Power Query is a solid entry point into the data world
Power Query is doing great in the world of modern analytics — and it looks like it's only going to grow in importance. Compared to Python, SQL, or DAX, it might not seem all that impressive at first glance. But inside the Fabric ecosystem, it's playing an increasingly important role. It's becoming a common language between analysts, data engineers, and business users. Power Query is accessible enough that all three groups can use it — each at their own level of expertise.
The ETL backbone of Fabric is, of course, Dataflows Gen2 — Power Query with a turbo boost: the same familiar interface, but with Fabric's compute scale, built-in AI Transforms in natural language, and the ability to write results directly where you need them (from Lakehouse and Warehouse, to SQL Database or even Snowflake). You've got over 170 built-in connectors, support for Fabric Variable Libraries (previously, filtering was only possible via a parameter created inside PQ — that's changed now), and continuously expanding transformation capabilities.
A gem that's been around for a while but still deserves a shoutout: table from examples. You can use it to process data from a web source. Just provide a few examples of the output you want (similar to "column from examples") and you get a complete set of M transformation steps, no manual configuration needed. It's a great example of dealing with a notoriously unpredictable data source: every website has different formatting, layout, and data placement. Power Query makes more advanced scraping techniques unnecessary.
If Power BI Desktop is your default environment, take a closer look at the Power Query ribbon. There's a shortcut that lets you move your entire query logic to a Dataflow Gen2 in the cloud. One click and your local ETL lands in Fabric. It requires a Fabric license, but if you have one, it's a serious time-saver when migrating logic from Desktop to the Fabric environment — and it once again highlights just how versatile Power Query really is.
🤖 The era of AI Agents is coming — but prepare your data first
This headline was essentially the keynote of the entire event. One sentence that really stuck with me: "You must empower your agents with the same knowledge and context as your employees." Sounds simple, but the real-world implications are massive. Without properly prepared data and context, an agent will behave like an employee with zero onboarding.
And here's the real challenge: most organizations don't yet have their data in a shape that's ready to feed agents. Ontology is not just a buzzword (even if it still sounds completely alien to you). What Fabric IQ enables is essentially a shared business language: one that lets an agent understand that "przychód" in your financial system and "revenue" in your CRM are the same thing, while also explaining how your organization actually calculates those metrics. The example is painfully simple, but that's exactly how it works. Without breaking down corporate knowledge into its building blocks, the agent will improvise, and we all know how that tends to go. The internet is already full of AI content, and AI memes. 😅 It's supposed to get better from here.
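To make the "shared business language" idea more concrete, here's a toy Python sketch of an ontology that maps source-system terms like "przychód" and "revenue" onto one canonical concept, which also carries the organization's calculation rule. Everything here (class names, fields, the example metric definition) is an illustrative invention, not a Fabric IQ API.

```python
# Toy sketch of the ontology idea: different systems use different labels
# for the same business concept, and the concept itself carries the
# official calculation rule. Not a real Fabric IQ API.
from dataclasses import dataclass, field

@dataclass
class Concept:
    canonical_name: str
    definition: str                      # how the org actually calculates it
    synonyms: set = field(default_factory=set)

class Ontology:
    def __init__(self):
        self._by_term = {}

    def register(self, concept):
        # index the concept under every known label, case-insensitively
        for term in {concept.canonical_name, *concept.synonyms}:
            self._by_term[term.lower()] = concept

    def resolve(self, term):
        return self._by_term.get(term.lower())

ontology = Ontology()
ontology.register(Concept(
    canonical_name="revenue",
    definition="SUM(invoice_net_amount) excluding intercompany sales",
    synonyms={"przychód", "sales revenue"},
))

# An agent asking about "przychód" and one asking about "revenue"
# land on the same concept and get the same calculation rule.
a = ontology.resolve("przychód")
b = ontology.resolve("Revenue")
print(a is b, a.definition)
```

The point is not the data structure itself but the contract: every term an agent encounters resolves to one concept with one agreed-upon definition, instead of the agent guessing.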
Worth noting: Data Agents have reached GA. Honestly? That happened fast. You can build them using prompts, no coding required: you define the goal, instructions, knowledge sources, and expected actions. And while you can currently connect only 5 data sources to a single agent, nothing stops you from building something like a council of cooperating agents — one knows finance, another knows logistics, a third coordinates the rest. Multi-layer agent architecture is starting to make a lot of sense.
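The "council of agents" idea can be sketched in plain Python. This is purely conceptual: the class names, the keyword-based routing, and the source lists below are all made up, and a real setup would run on the Fabric Data Agent service rather than local objects.

```python
# Conceptual sketch of a council of cooperating agents: each specialist
# knows a capped number of data sources (Fabric currently allows 5 per
# Data Agent), and a coordinator routes each question to the right one.
MAX_SOURCES = 5

class SpecialistAgent:
    def __init__(self, domain, sources):
        if len(sources) > MAX_SOURCES:
            raise ValueError(f"{domain}: max {MAX_SOURCES} data sources per agent")
        self.domain = domain
        self.sources = sources

    def answer(self, question):
        return f"[{self.domain}] answered using {len(self.sources)} sources"

class Coordinator:
    def __init__(self, agents):
        self.agents = {a.domain: a for a in agents}

    def route(self, question):
        # naive keyword routing for illustration; a real coordinator
        # would classify the question with an LLM instead
        for domain, agent in self.agents.items():
            if domain in question.lower():
                return agent.answer(question)
        return "no specialist available"

council = Coordinator([
    SpecialistAgent("finance", ["gl_lakehouse", "invoices_warehouse"]),
    SpecialistAgent("logistics", ["shipments_eventhouse"]),
])
print(council.route("What was the finance margin last quarter?"))
```

The design choice this illustrates: instead of one agent straining against the 5-source cap, each specialist stays small and focused, and the coordinator is the only piece that needs to know the whole map.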
🎨 Visuals are no longer your competitive edge — the data model is
A few years ago, a polished, visually refined report was a real differentiator. Today, the bar for default settings has gone up significantly. FabCon showcased Modern Visual Defaults (currently in Preview): new reports now start with a refreshed aesthetic (Fluent 2, which I mentioned in last month's update post), sensible padding, a grey background, and subtitles enabled by default. The goal is simple — a report should look decent right out of the box, without hours of digging through the Format Pane. On top of that, Copilot can generate a pretty solid chart in seconds, and combined with the refreshed styling, it looks even better.
What does this mean in practice? You'll spend less time tweaking visuals and more time on the data model, business logic, and understanding what the data is actually trying to say (storytelling). Time saved on formatting is time you can invest in building a truly solid model. That's exactly why I increasingly think that in the age of AI Agents and a growing Copilot, pixel-pushing on reports matters less and less — because with a decent color theme, the defaults already look pretty good. That's where the title of this section comes from.
📊 Good news for planners — Lumel is coming to Fabric
I didn't mention this in my Instagram post, but for me it's one of the more interesting announcements from this FabCon. Microsoft announced the Planning in Fabric component, designed to support budgeting and forecasting processes with writeback to Fabric SQL, powered by Lumel. Does the name ring a bell? Lumel is a company that previously built its own standalone solution in this space, and now it's becoming part of the Fabric license.
In short: you model budgets and forecasts directly from a Power BI report, the data lands in Fabric SQL, and AI can support both historical analysis and future forecasting. A big advantage here is that the semantic layer is shared across goals, plans, and actuals — no need to split everything across multiple systems. If you deal with financial forecasts on a daily basis and have been juggling Excel, Power BI, and some ERP in separate silos, this integration will be a game changer. I also see potential use cases for Lumel's writeback capabilities (given its flexibility) on projects that have nothing to do with forecasting or budgeting at all.
✍️ Easier data input directly from the report
Here we enter the world of Translytical Task Flows, which reached GA in March 2026 (I covered this in my March update post). At the conference, a quick demo showed a user updating a project status, adding a note and a comment — all directly from within a Power BI report — with the change landing in the database and triggering a Teams notification. Zero app switching. It's as simple as that!
Technically, Translytical Task Flows (wild name, by the way 🤯) run on Fabric User Data Functions with Fabric as the write destination (SQL DB / Lakehouse / Warehouse). You also need an Input Slicer to make it all work. The setup requires a fair amount of effort and testing — this is definitely not a beginner-friendly feature — but it significantly simplifies input validation and data collection from users. Thanks to TTF, a report stops being a static view, and combined with conditional formatting, you get great options for signaling field status.
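As a rough illustration of that write path, here's an in-memory Python simulation of the validate-then-write pattern a Translytical Task Flow relies on. Real Fabric User Data Functions are also written in Python, but run server-side via the Fabric SDK against SQL DB / Lakehouse / Warehouse; everything below (the function name, the allowed statuses, the notification list) is a made-up stand-in.

```python
# Simulation of the TTF write path: a user-triggered function validates
# the input coming from the report, writes it, and fires a notification.
# In-memory stand-ins replace the real SQL table and Teams webhook.
ALLOWED_STATUSES = {"On track", "At risk", "Delayed", "Done"}
project_status = {}           # stand-in for the target SQL table
notifications = []            # stand-in for the Teams notification step

def update_project_status(project_id, status, note=""):
    # validation happens server-side, before anything is written,
    # which is exactly what makes TTF input collection reliable
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"invalid status: {status!r}")
    if len(note) > 500:
        raise ValueError("note too long")
    project_status[project_id] = {"status": status, "note": note}
    notifications.append(f"Project {project_id} set to {status}")
    return "OK"

update_project_status("PRJ-042", "At risk", "Vendor delay on milestone 2")
```

The key property the demo showed survives even in this toy version: the report user never touches the database directly, and invalid input is rejected before it lands anywhere.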
🎡 Fabric as an amusement park — and the token problem
This point was heavily emphasized at the conference, and it's worth repeating, because this framing really helps understand the mechanics inside Fabric — especially since cost is the #1 concern organizations have about the platform. So here it is: Capacity in Fabric works like a pool of tokens, where every operation (dataset refresh, AI usage, sending a query) costs CU. The pool is "use it or lose it" — unused tokens don't roll over to the next day.
When tokens start running out, the first thought is usually: we need more capacity. But what came through loud and clear at the conference was this: the problem often isn't that Capacity is too small — it's that we don't know how we're consuming resources. You don't need much to hit the limits. The classic scenario: Monday morning, everyone refreshes every dashboard at once, and suddenly performance tanks and the whole system becomes unpredictable. Or someone decides to spin up AI Agents, and once they scale out of control, everything else on the Capacity slows down or stops. The only way to get predictability is isolation and dedicated resource allocation per project. The conference also covered SQL pools (applying to Warehouse and SQL Endpoints only), which can further support this process.
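Sticking with the amusement-park framing, the mechanics can be modeled in a few lines of Python: a daily pool of Capacity Units that every operation draws from, with no rollover. The CU costs below are invented for illustration; real costs depend on the operation and the capacity SKU, and the real billing model also applies smoothing.

```python
# Back-of-the-envelope model of Fabric capacity as a daily token pool:
# operations draw CUs, nothing rolls over, and once the pool is drained
# further work gets throttled. Costs here are made up for illustration.
class CapacityPool:
    def __init__(self, daily_cu):
        self.daily_cu = daily_cu
        self.remaining = daily_cu

    def run(self, operation, cost):
        if cost > self.remaining:
            return False          # throttled: everything on this capacity suffers
        self.remaining -= cost
        return True

    def new_day(self):
        self.remaining = self.daily_cu   # "use it or lose it": no rollover

pool = CapacityPool(daily_cu=1_000)
# Monday morning: everyone refreshes every dashboard at once
for _ in range(8):
    pool.run("dataset refresh", cost=100)
ok = pool.run("ai agent query", cost=300)
print(ok, pool.remaining)   # the agent gets throttled with 200 CU left
```

Even this toy model makes the conference's point visible: the Monday-morning refresh stampede alone can starve the AI workloads, which is why isolation and dedicated allocation per project beat simply buying a bigger pool.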
🔧 A few things off the main stage worth noting
These didn't make it to the main sections of this post, but they're worth at least a mention:
Observability for Data Agents is coming. We'll get a dedicated platform for monitoring agent activity — query logs, a breakdown of actions taken with the reasoning behind each decision, and who approved the final action. If you care about AI auditability, this is big news.
Terraform as a programmatic option for configuring analytics environments. Great news for data freaks and Infrastructure as Code fans who don't want to click through the UI when onboarding new workspaces.
Recycle Bin at the workspace level! Seriously. This is going to be a genuine life-saver. 😅 It'll cover artifacts deleted up to 90 days back (exact range configurable in Admin Portal). Sounds trivial — until you accidentally delete something critical. We'll sleep better now. 😄
Selective branching in Git — the ability to push only selected artifacts to the repo, not the entire workspace at once. Useful for anyone working with multiple projects in a single workspace. Plus, the ability to preview commit contents before pushing. Finally!
Agent output limit going from 25 to 1,000 rows — huge practical difference for tabular responses. With such a small result set, it was often hard to verify whether the returned data was actually correct.
Agents will get more flexible data source support — soon they'll be able to connect not just to Eventhouse, but also to Lakehouse and Warehouse, including mirrored data. No more over-engineering the data feeding pipeline. Currently you can connect 5 data sources to a single agent — but that may change too. For now, building more focused agents and connecting them in decision networks for specific tasks is the way to go.
✅ Wrapping up
Among all these announcements, you can read one main message between the lines: Fabric is meant to be one environment for everything — ETL, modeling, AI, write-back, and agent management. Less tool-hopping, more integration under one roof. What else stood out? The pace of change is rapid — sometimes almost startling. What was in Preview a moment ago is already hitting GA. Not every feature is a game changer, but the overall trend is clear: Microsoft is accelerating in the Business Intelligence space. So — see you at the third edition of FabCon Poland? I think so. 🤠
Feel free to check out my Instagram and LinkedIn profiles for more content. 😊
And be sure to browse my other blog posts: [LINK]
