Graham McLeod

Leveraging Assets

An asset was traditionally something you owned which had value, or which you could use to derive value. An example of the former would be cash or gold; an example of the latter, an item of equipment.
We can update this in two important ways:

  1. Assets can be virtual or digital, so we could have something like a skill, a design, a patent or a recording

  2. We don’t necessarily have to own them to derive value from them

Some of the fastest-growing and most valuable companies do not own the assets they leverage. Uber does not own cars; AirBnB does not own accommodation; YouTube does not own the content it serves.

Virtual assets, such as a design, can be very valuable. We can profit from royalties, copyright, trademarks etc. without necessarily ourselves making the product or delivering the service. Consider the inventor of the crown bottle cap, William Painter, whose company received a royalty on every cap used for several decades!

Digital assets are also profitable. A music track is recorded once, but can be listed on thousands of websites virtually for free, then duplicated and shipped to consumers, again almost for free. This can occur millions of times, generating substantial revenues without parting with the original asset.

The best though, is using someone else’s assets to deliver value. Uber, for example, uses the assets of owners (cars), the assets of drivers (skill and time), the global infrastructure of the Internet (funded by advertising, corporates and governments) and the asset of the user (cell phone) to deliver a valuable and desirable service.

In doing business and architecture planning, it is useful to contemplate Asset Leverage.

First list assets. Look for things that you own, things that you know, things that you know how to do. Try to find things in the categories of physical (e.g. property, stock, equipment); monetary (cash, investments, shares, bonds etc.); knowledge/designs/patents (e.g. books, recordings, designs, models); virtual (e.g. skills, customer goodwill) and digital (e.g. recordings, images).

Next think about assets you do not own that you can leverage. Examples include those of Partners (e.g. supplier knowledge, skill, equipment, stock); Customers (e.g. premises, network, computing, cell phone, time); Investors (expertise, connections); Infrastructure (e.g. Internet, public facilities); other Owners of something you need (e.g. Accommodation, Cars, Images, Location data).

Figure out to what extent you are currently leveraging the assets. Look for opportunities to leverage them to a greater extent. A great deal of value can be unlocked this way. You can find the best opportunities by looking for those assets that can generate a lot of value that you can access with relatively little effort or expense.  

#businessarchitecture #strategy #businessanalysis #digitaltransformation #assets

Stumbling towards AGI (Artificial General Intelligence)

Elegant Architecture overcomes limited and messy implementation?

A new article discusses HuggingGPT, which uses ChatGPT as a human interface and executive controller, delegating the tasks needed to complete a goal to specialist AI models that perform narrow functions well. The video discusses the ideas and is a great introduction to the paper.

Better Search: Will ChatGPT (or similar) displace Google?

Google has become indispensable in our work and personal lives: finding products and services, checking out reviews, finding the cheapest supplier and doing professional research. Google has built a $150Bn advertising revenue business on top of that ubiquity.

The ChatGPT large language model from OpenAI burst on the scene recently, attracting over a million users in a week. It boasts a conversational interface that accepts complex queries in natural language, responds in human language, and allows us to refine our search, seek more detail or pursue other aspects easily. This style of interaction is extremely attractive - it’s a bit like having a hugely knowledgeable human expert on tap to instantly understand our questions and answer in an accessible paragraph or two. It raises the question: “Is this the future of search?” Fuel has been added to the fire of this debate by Microsoft’s investment of a further $10Bn in OpenAI. Remember that Microsoft has long promoted Bing in competition with Google search.

But not so fast… The results from ChatGPT are not always accurate. It is based upon a predictive model which has ingested huge amounts of data from the Internet and document sources. Because it is a mathematical model based on probability, it will favour average and mainstream opinions from its training set. It can be prompted to produce factually incorrect answers which are stated very convincingly as facts - annoying if the recipient already knows the facts, but dangerous or misleading if they do not. The model is also trained on this corpus of data at a given time, in a “batch” mode, so it may not reflect information recently published, or which has been updated since the last training cycle.

OpenAI and others wanting to promote these kinds of systems for search will have to find ways to improve accuracy and currency of the underlying models and provide caveats to users about potential bias and inaccuracy. Meanwhile, Google, which itself has significant AI systems and probably the best, biggest data sources to train them on, can easily add a conversational interface.

To date, ChatGPT has been offered for free use (to gain experience, publicise capabilities and refine the models), but this is likely to change very soon. OpenAI does not yet have in place an advertising supported model like Google and is likely to first try subscriptions. But when it is no longer free, other competitors will spring up.

One smaller but interesting player is looking to offer the best of both worlds, starting now: Andi. Andi search lets you use GPT-style prompts and provides a summary answer (much like ChatGPT), but also provides references and search results on the right to allow validation or further exploration. This is very promising! It should be an exciting time in search this year.

Dealing with Change

We probably all feel a little battered by the levels of change we are experiencing. Technology, pandemic, business models, social mores, ethics, sustainability, legislation and more. It is hard to retain our sense of perspective and balance and self worth when everything seems to be shifting around us!

As architects we are often the agents of change for the organisation, processes, products, systems and technology. But that does not mean we ourselves are always that happy with change! The threat is that it brings risk: Are we focussing on the right things? Are there new factors we aren’t aware of? Is our “known good solution” still relevant?

I find comfort in Jeff Bezos’ approach, which advocates:

“Find what is not going to change and optimise for that”

He observed that, in the case of Amazon, the following factors were unlikely to change:

  • Customers want lower prices

  • Customers want fast delivery

  • Customers want increased selection

And in the case of Amazon Web Services:

  • Customers want reliability

  • Customers want low prices

  • Customers want rapid innovation in adding APIs (increasing utility of the platform)

Find the things in your business / industry that will not change and optimise for them. 

Architecture as a Context for Agility

Agility requires doing focussed things rapidly. The more you know going in, the better decisions you can make quickly. The more you document what you learn, the more knowledge is available for future efforts. Good agile work fills in more of the picture thereby enabling all teams.

The more of the picture is filled in, the more we can avoid wasted effort, align our efforts and deliver with less risk. You can’t create the full picture quickly, which is why many agilists avoid architecture.
But you can start with a “paint by numbers” reference model/ontology. This gives you a framework into which to rapidly record your growing knowledge, an index of where to look for information for your next effort, and a map of what touches the squares you want to colour, so you know how to stay informed and compatible.

Every project (agile or otherwise) should:

  • Be informed by our knowledge of current architecture assets and challenges

  • Contribute to an improvement in assets, condition, effectiveness and future readiness

  • Improve the architecture of the portfolio

  • Deliver business value

  • Fill in more of the architecture “big picture” to inform future projects

The environment should:

  • Have a coherent, integrative meta model/shared concepts

  • Encourage good work through well conceived principles

  • Have standards for how things get recorded (artefacts) so they are meaningful and sharable

  • Provide a collaborative repository that holds things and makes them findable

#agile #project #architecture #context

If data is the lifeblood, how’s your heart?

Organisations are paying more attention to data management, often driven by compliance, privacy or cyber security concerns. But simply holding data doesn’t generate value.

We need a thorough understanding of the relationships between data (numbers, text, pictures, audio, video, facts), information (data meaningful to humans: salary, sales, order, invoice, fingerprint etc), knowledge (richly connected data: contextual data, trend data, inference) and wisdom (deep insights, experience shaped). Value increases as we move up this hierarchy. Alongside that, if we are to understand what we have, manage it properly, secure it, use it, integrate it etc., we need meta data: data about data. Where is it from? How is it structured? Who owns it? How much can we trust it? How is it derived? What format is it in? Where do we keep it? How long should we hold it? What are the constraints on its use…
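To make the metadata idea concrete, here is a minimal sketch assuming nothing beyond the questions listed above; the field names and the example dataset are invented for illustration, not a standard:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the metadata questions from the text captured as
# fields on a simple record. Field names are illustrative only.
@dataclass
class DatasetMetadata:
    name: str
    source: str            # where is it from?
    owner: str             # who owns it?
    structure: str         # how is it structured?
    format: str            # what format is it in?
    location: str          # where do we keep it?
    trust_level: int       # how much can we trust it? (e.g. 1-5)
    retention_years: int   # how long should we hold it?
    usage_constraints: list = field(default_factory=list)  # constraints on use

# An invented example record describing one dataset
sales = DatasetMetadata(
    name="monthly_sales",
    source="ERP extract",
    owner="Finance",
    structure="tabular",
    format="CSV",
    location="data lake /finance/sales",
    trust_level=4,
    retention_years=7,
    usage_constraints=["internal use only"],
)
```

Even a simple record like this makes the data describable, ownable and governable; richer metadata (lineage, derivation rules) can be layered on later.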

All of the above are complicated by the explosion of data brought on by new forms of data; technology capabilities to capture, store, manipulate, communicate, generate, represent and analyse data and innovative applications. Virtualisation of products and services compounds the problem, as more of what we offer and sell to customers is information rather than physical.

There are more opportunities than ever before to profit from data, information, knowledge and its proper use. But there are also more challenges associated with doing it properly, successfully, reliably and securely. All of these rely upon skills and capabilities. Specifically, we need high skills to understand, analyse, model, design and implement data related products and services. This is the realm of Data and Information Architecture.

Architects also need to understand business requirements, facilitate communication and build consensus, define vision, bridge gaps and scope initiatives. They need to guide projects and solution designers. Crucially, they need to connect the business/conceptual view of data with the logical (application) and physical (database and technology) views. They need to devise, apply and encourage use of good principles to evolve the data and information landscape in positive ways.

Data management is ultimately a business responsibility, but can be assisted by many technical skills, including: maturity assessment, modelling, meta data management, technology architecture, risk analysis, integration design and considerations of security and privacy.
A comprehensive data/information architecture and data management capability is vital to deliver business benefits as well as ensure security, privacy and acceptable risk.

These are all topics covered in depth in our Techniques and Deliverables of Information Architecture intensive online live course from 7-11 November. See details here.

#dataarchitecture #informationarchitecture #digitaltransformation #bigdata #businessintelligence #bi #datamodelling

What comprises a “Solution Architecture”?

A solution is a combination of components in a configuration that solves a problem or exploits an opportunity in a way that meets our goals. Hopefully it is also effective, efficient, sustainable, ethical and relatively risk free.

It is not just a software system, but rather a combination of software, process, people skills, data and technology that meets business, human, technical and legal requirements.

Considering the provided rich picture:

The items outside the circle represent the context in which a solution is developed. We ignore these at our peril. If we do not know the Business Goals for a solution, we can only meet them by extraordinary luck. If we do not know the Legal constraints, we will fall foul of the law. If we do not understand the Customer and the Stakeholders, we are unlikely to provide something they are happy with. If the service is not delivered via the required Channels or compatible with the Brand strategy, we may miss the mark entirely. In short, many of these should inform our Requirements. Alan Kay famously remarked: “Context is worth 80 IQ points.”

Our requirements should certainly also include the Functions that must be performed, the Business Objects (Data) that are used, the Technology we need, the Process to deliver Value, the Application Services we may use or provide, the User Interfaces, the Events we need to respond to or generate, and the Locations where we need to be available.

Non-Functional requirements will also play a major role in the viability and success of a solution. These include aspects such as security, reliability, performance, cost, maintainability, flexibility, ease of use, compatibility and many other factors.

Customer Experience is crucial to ensure wide and willing acceptance and delivery of business value. Staff experience is also key, as is that of other professionals who will deal with the solution, including Operations, Support and Maintenance staff.

The Solution Architecture should follow some important principles, including modularity, loose coupling, message-based communication and open standards. It’s also good to have tests built in and automatically repeatable, and an affinity for DevOps or Continuous Deployment. Intuitive user interfaces with built-in tutorial aids are really important too.

A cost effective system might be composed of off the shelf components, reusable library elements, configurable components and custom developed elements. The solution includes these and the other elements of human skill and capability, supporting technology, infrastructure, documentation etc.

We may also need to contemplate the development/implementation dependencies and partition the solution into an initial Minimum Viable Product plus one or more incremental delivery releases to get us to the full capability required.

Solution Architecture is a challenging but very rewarding role.

#SolutionArchitecture #Architecture #Requirements

Just one API?

GraphQL provides a single endpoint for queries and updates

Jargon buster at the bottom of post.

First came RPC, to call a function across a network. But it was language specific and lacked standard facilities. So DCE was created to address common requirements such as directory services, time, authentication and remote files. But it was not object oriented, and then Smalltalk, C++, Java et al arrived. So Microsoft devised DCOM to provide distributed services for Microsoft languages, while others backed CORBA, which provided cross-platform and cross-language services. Both required agreement on message formats ahead of time.

Enter Web Services, leveraging XML to serialise data, WSDL to describe services, UDDI to publish, find and bind to them, and SOAP to message remote objects. Great! We could now find, bind to and invoke services without prior design agreement. But it was not very efficient, required a lot of plumbing on each end, and demanded quite a bit of knowledge from developers.

So Roy Fielding devised REST, exploiting HTTP to provide a simple way of working with remote Resources. REST allows us to simply access remote servers and retrieve something (GET), inform about something (POST), store something (PUT), update something (PATCH) or delete something (DELETE). This is achieved by creating simple headers and a request line including the URL and parameters. POST also has a body.

REST is very light weight and does not need much infrastructure. Combining it with JSON made it very easy to use from within web pages and mobile applications and it quickly took off.
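As a sketch of just how light-weight this is, the following hypothetical example builds REST requests using nothing but the Python standard library; the endpoint and resources are invented, and nothing is actually sent over the network:

```python
import json
import urllib.request

# Illustrative only: a REST request is just an HTTP verb plus a resource
# URL (and optionally a JSON body). The host below is hypothetical.
BASE = "https://api.example.com"

def build_request(verb, resource, body=None):
    # Serialise the body as JSON when present; GET/DELETE carry none
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        url=f"{BASE}/{resource}",
        data=data,
        method=verb,
        headers={"Content-Type": "application/json"},
    )

get_customer = build_request("GET", "customers/42")            # retrieve
new_customer = build_request("POST", "customers",              # create/inform
                             {"name": "Ada Lovelace"})

# The request object carries the verb and URL; calling urllib.request.urlopen
# on it would actually send it.
print(get_customer.get_method(), get_customer.full_url)
```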

But there was a problem. Each REST request would get a specific thing from the server. If there is a rich database or knowledge graph on the server, we can create many REST APIs: at least one for each kind of domain object (e.g. Customer, Product, Account, Invoice); often more than one to cater for different application requirements (partial records, related records etc.). Plus we will have different APIs to query, to store, to update and so on. So a server with a database managing a score of domain concepts could quickly require hundreds of APIs. Ew, that’s a lot of development, testing, deployment, documentation, maintenance…

Facebook ran into this problem at scale. Their solution was a query language that would live in the server as a single entry point and receive a query request as a parameter. This is not dissimilar to the way a relational database receives dynamic SQL requests. Now the tailoring of a response can happen in the server (more efficient) and we have only one API endpoint to maintain. Voila - that solved the problem for Facebook. Fortunately, they published it as GraphQL, which allows writing query and update (mutate) statements and having these fulfilled by a suitable GraphQL processor / application / database on the server. Initially these were discrete, but they are starting to be embedded in database systems, especially Graph Databases. One good example is DGraph.
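A minimal sketch of the idea, with hypothetical schema names (customer, orders, total): the client names exactly the fields and related records it wants, and the whole request travels as a single JSON payload to one endpoint:

```python
import json

# The client composes one GraphQL query naming exactly the fields it
# wants, including nested related records. The schema is invented here.
query = """
query {
  customer(id: "42") {
    name
    email
    orders(last: 3) {
      total
      placedAt
    }
  }
}
"""

# The entire query travels as one JSON payload to a single URL,
# typically POST /graphql - no per-object endpoints required.
payload = json.dumps({"query": query})
```

Contrast this with REST, where the same result might need a customer endpoint, an orders endpoint, and client-side stitching of the two responses.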


You can also find good explanations of most of these topics on Wikipedia.

  • RPC - Remote Procedure Calls

  • DCE - Distributed Computing Environment

  • API - Application Programming Interface. A way of requesting a service or function contained in another piece of software. Most commonly used today to refer to a REST API

  • COM/COM+ - Microsoft Component Object Model. An architecture that allowed sharing of objects between Microsoft languages.

  • DCOM - Microsoft Distributed COM. Similar to COM, but allowing objects to be remote

  • .Net - Microsoft Component model and framework that succeeded COM and DCOM

  • CORBA - Common Object Request Broker Architecture. An architecture for distributed object messaging across languages and technologies.

  • Web Services - A set of standards, leveraging XML, that allows requesting services across the Internet. Includes WSDL, UDDI, SOAP.

  • XML - eXtensible Markup Language. A standard for encoding data onto text with specific tagging of the meaning of the values.

  • WSDL - Web Services Description Language. An XML document describing a Web Service.

  • SOAP - Simple Object Access Protocol. A way to invoke a (remote) service in the Web Services approach. Effectively an XML message requesting a given service and expecting an XML response message.

  • UDDI - Universal Description Discovery and Integration. A protocol for publishing Web Service Descriptions and for finding these.

  • HTTP - Hypertext Transfer Protocol. The foundational protocol of the Web, which allows hyper-linking.

  • REST - Representational State Transfer. An architectural style that leverages the intrinsic functions of HTTP to support requesting services across the Internet with minimal other infrastructure.

  • JSON - JavaScript Object Notation. A way of encoding JavaScript data structures onto text for transmission or sharing. Similar in purpose to XML, but lighter weight.

  • GraphQL - A query language used on a client and interpreted in a server which allows easy retrieval of data using graph concepts (Nodes, properties and relationships).

  • RDF - Resource Description Framework. A standard for defining facts and knowledge using simple statements with a Subject, Predicate, Object format. Part of Semantic Web standards.

  • DGraph - a Property Graph database that supports graph schemas, RDF, JSON and GraphQL natively at web scale. Also does ACID transactions.

  • ACID - Atomic, Consistent, Isolated, Durable. Desirable attributes of transactions in a database.

#API #Services #REST #WebServices #SolutionArchitecture #Design #GraphQL

Lasting Impact of the Little Language that Could: Smalltalk turns 50

In the late 1960s and early 1970s, Xerox was a major player in the office automation space. Innovative work on user interfaces was happening at RAND Corporation (JOSS, tablets, GRAIL), Stanford Research Institute (Doug Engelbart, personal interactive computing, the mouse, etc.) and MIT/Lincoln Laboratories (Ivan Sutherland / Sketchpad). Xerox gave Alan Kay and his team at its Palo Alto Research Center (PARC) free rein to explore human-computer interaction. Alan had worked on the ARPANET and did a PhD on the FLEX machine, a precursor to a truly personal computer. He conceived the “Dynabook”, which conceptually defined a tablet (think iPad, but easier for the user to program and tailor) - in 1968!

Amazing things came out of PARC, including:

  • Object oriented programming for UI and general purposes

  • Smalltalk (still one of the best, purest, easiest to learn and productive general purpose languages available today)

  • Keyword syntax facilitating domain / application specific languages

  • Just in Time Compilation (JIT) and Virtual machine execution of bytecodes allowing systems to be ported easily across hardware

  • Integrated Development Environment (IDE) with introspection

  • Bitmapped displays with graphics and fonts

  • Image storing state of system allowing easy and instant persist/restore and continuation of work

  • Model View Controller (MVC) paradigm for separation of domain model, business logic and user interface

  • Windows, Icons, Mouse and Pointer (WIMP) paradigm with overlapping, resizeable windows and the whole Graphical User Interface

  • Text, Image and Document editing with What you See is What you Get (WYSIWYG)

  • Laser printing

  • Ethernet
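One of the PARC contributions listed above, the MVC paradigm, can be sketched in a few lines. This is an illustrative miniature in Python, not Smalltalk’s original classes: the model knows nothing about presentation, the view renders the model, and the controller translates user input into model changes.

```python
# Minimal MVC sketch (invented example, not Smalltalk's actual API).

class CounterModel:                 # domain state; no UI knowledge
    def __init__(self):
        self.value = 0
        self.observers = []

    def increment(self):
        self.value += 1
        for observer in self.observers:
            observer.update(self)   # notify dependents, Smalltalk-style

class CounterView:                  # renders the model when notified
    def __init__(self):
        self.last_rendered = None

    def update(self, model):
        self.last_rendered = f"Count: {model.value}"

class CounterController:            # maps user input onto the model
    def __init__(self, model):
        self.model = model

    def handle_click(self):
        self.model.increment()

model = CounterModel()
view = CounterView()
model.observers.append(view)
controller = CounterController(model)

controller.handle_click()
print(view.last_rendered)   # -> Count: 1
```

The separation means the same model can drive several views at once, and the UI can be replaced without touching domain logic.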

We owe these pioneers a major debt of gratitude! Subsequent developments include:

  • GUIs at Apple (licensed from Xerox), then Microsoft Windows (imitated)

  • Objective C, the major systems language at Apple (Smalltalk ideas and class libraries on top of C) - the precursor of Swift

  • Object oriented databases

  • Office suites - Charles Simonyi did Bravo at PARC on the Alto system, the first WYSIWYG document editing system. He later spent 20+ years at Microsoft and created Word and Excel

  • OO in general, Smalltalk being a major influence on Java, JavaScript, Ruby, Eiffel, Dart and many other languages. It is a direct ancestor of Squeak, Pharo, Amber and Newspeak

  • eXtreme & Pair programming (Kent Beck, Ward Cunningham) and aspects of Agile Development

  • Live programming/ debugging

  • Test Driven Development (SUnit, Kent Beck)

  • Agile Visualisation (Roassal)

  • Moldable Tools (Tudor Girba, GTools)

  • EToys and Scratch visual programming for children

I saw Smalltalk ideas in the 1981 Byte article, got hands-on (and seduced) in 1991, and we have used it ever since in our products and tools. Capers Jones’ 2017 research confirms Smalltalk still offers a 2-3x productivity improvement over mainstream languages. Vive la difference!

Application Portfolio - Deriving Value from the Asset

The application portfolio in mature organisations represents a very significant investment over an extended period. Expenditure can easily run into hundreds of millions or even billions. This can be a major asset to leverage to produce value, or a major problem that consumes resources and funds.

Managers, architects and analysts often don’t know where to begin or where to focus to improve value delivered and the contribution of the portfolio to strategic goals. Two things that can help are Taxonomy and a Landscape Health Assessment.

Taxonomy is a common architecture technique where we use a set of categories (usually capability or functional) to organise the baseline applications so that we can detect redundancy (multiple things doing the same job), gaps (where we do not have something or the current solution is not adequate) and opportunities (where there is something useful but it is not widely deployed; or there is an easy “off the shelf” solution for a gap). A good starting Taxonomy can often be obtained as an industry or domain reference model.
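The redundancy and gap detection described above can be sketched very simply; the capability categories and application names below are invented for illustration:

```python
# Sketch of the taxonomy technique: map each baseline application to a
# capability category, then scan for redundancy and gaps.
# Capabilities and applications are hypothetical examples.
capabilities = ["Billing", "CRM", "Payroll", "Inventory"]

app_to_capability = {
    "LegacyBill": "Billing",
    "QuickInvoice": "Billing",     # two apps doing the same job
    "SalesForceX": "CRM",
}

# Group applications under each capability
by_capability = {c: [] for c in capabilities}
for app, cap in app_to_capability.items():
    by_capability[cap].append(app)

# Redundancy: multiple apps covering one capability
redundant = {c: apps for c, apps in by_capability.items() if len(apps) > 1}
# Gaps: capabilities with no covering application
gaps = [c for c, apps in by_capability.items() if not apps]

print("Redundancy:", redundant)   # Billing is covered twice
print("Gaps:", gaps)              # Payroll and Inventory are uncovered
```

In practice the categories would come from an industry reference model and the mapping would carry more attributes (condition, cost, strategic fit), but the detection logic is essentially this cross-tabulation.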

A Maturity Model is less common. In fact, several years ago we were doing a consulting assignment and looked in vain for a “maturity model” or “portfolio review model” for the application landscape. In the end we created one which we have used since. I recently revised this to include guidance based upon findings (as we have for other models including our Data Management Maturity Model) and we have automated it on our Maturity Model Platform, in turn based upon our Enterprise Value Architect tooling.

This provides a quick, guided, automated way to move from little knowledge to a robust view on the application portfolio status; scores on several important health dimensions; and recommendations of actions to improve the health of the portfolio and value delivery to the business. The instrument also looks at strategic issues and leveraging technology. For a limited time, you can try it free. You can save results, retrieve them in future and compare them over time or scenarios.

Take the Application Landscape Assessment

We welcome feedback and further questions.

If you are keen to build Application and Solution Architecture Skills, consider our Techniques and Deliverables of Application and Solution Architecture online live 5 day course (31 Oct - 4 Nov).

You can read the full details or enrol here.