Channel: Yet Another Tridion Blog

A DD4T.net Implementation - Taxonomy Performance Issues


Retrieving Classified Items for Multiple Keywords

My intention when retrieving the classified items for a taxonomy is to resolve the entire taxonomy (i.e. all its keywords) first, and then cache it. This way, retrieving the related content for a given keyword would be very fast. I described this approach for DD4T Java in post Retrieve Classified Items for Multiple Keywords.

There is one issue with that approach in .NET: it is not standard API, and one must write custom Java Hibernate code to retrieve the classified items. In DD4T Java that is not a problem, but in DD4T .NET, exposing the custom Java logic to the .NET CLR is not easy; it would involve writing JNI proxy classes to bridge the two virtual machines. I just don't feel like writing that code.

Enter the second-best approach: resolving classified items on the fly, one keyword at a time, on demand, as described in post Taxonomy Factory.

Retrieving Large Taxonomies

Another performance issue is to retrieve large taxonomies. The API call to read entire taxonomies is TaxonomyFactory.GetTaxonomyKeywords(taxonomyUri). This is the only method on the Tridion.ContentDelivery.Taxonomies.TaxonomyFactory class that will retrieve a root Keyword with all its Parent/Child keyword properties resolved, so the taxonomy is fully navigable up and down.

However, the method above is a bottleneck for large (really large) taxonomies. Internally, the method reads all keywords in the taxonomy along with all their custom meta objects, which can be a significant performance hit.

My solution for this problem was to use a discovery algorithm that reads one keyword in the taxonomy and resolves its parent keywords up to the root. Resolving means reading the keyword's custom meta and the items directly classified against it. The root keyword is then cached, together with all its discovered child keywords.

When a new keyword is requested, the algorithm first tries to find it under the cached root keyword (as one of its possible children). If that search does not find the keyword, we assume the keyword has not been resolved yet, so the discovery process starts again and attaches the new keyword at its appropriate level in the taxonomy.

Slowly and on demand, the taxonomy structure is built up, consisting only of the keywords that were actually requested. The following code shows this algorithm. Note the usage of the TaxonomyFactory.GetTaxonomyKeyword() method, which returns a partially resolved keyword.

using dd4t = DD4T.ContentModel;
using tridion = Tridion.ContentDelivery.Taxonomies;

public IMyKeyword ResolveKeywordLazy(dd4t.IKeyword keyword)
{
    IMyKeyword result;

    if (keyword == null)
    {
        return null;
    }

    if (keyword is IMyKeyword)
    {
        result = (IMyKeyword)keyword;
    }
    else
    {
        IMyKeyword root;
        string key = GetKey(keyword.TaxonomyId);
        CacheWrapper.TryGet(key, out root);
        if (root == null)
        {
            result = ResolveKeywordLazyRecursive(keyword, out root);
            CacheWrapper.Insert(key, root, cacheMinutes);
        }
        else
        {
            result = ResolveKeywordLazyRecursive(root, keyword);
        }
    }
    return result;
}

private IMyKeyword ResolveKeywordLazyRecursive(IMyKeyword root, dd4t.IKeyword keyword)
{
    if (keyword == null)
    {
        return null;
    }

    IMyKeyword result = GetKeywordByUri(root, keyword.Id);
    if (result == null)
    {
        tridion.Keyword tridionKeyword = taxonomyFactory.GetTaxonomyKeyword(keyword.Id);
        result = new TaxonomyConverter().ConvertToDD4T(tridionKeyword);
        IMyKeyword parent = ResolveKeywordLazyRecursive(root, result.ParentKeyword);

        if (parent != null)
        {
            result.ParentKeywords.Clear();
            result.ParentKeywords.Add(parent);
            parent.ChildKeywords.Add(result);
        }
    }

    return result;
}

private IMyKeyword ResolveKeywordLazyRecursive(dd4t.IKeyword keyword, out IMyKeyword root)
{
    if (keyword == null)
    {
        root = null;
        return null;
    }

    tridion.Keyword tridionKeyword = taxonomyFactory.GetTaxonomyKeyword(keyword.Id);
    IMyKeyword result = new TaxonomyConverter().ConvertToDD4T(tridionKeyword);
    IMyKeyword parent = ResolveKeywordLazyRecursive(result.ParentKeyword, out root);

    if (parent == null)
    {
        root = result;
    }
    else
    {
        result.ParentKeywords.Clear();
        result.ParentKeywords.Add(parent);
        parent.ChildKeywords.Add(result);
    }

    return result;
}



A DD4T.net Implementation - Navigation

Most of the websites I have worked on already existed before being upgraded to DD4T. This means the navigation had already been implemented, usually by publishing an XML file from Tridion (almost always called navigation.xml ;-) ) that is then rendered dynamically at request time by means of an XSLT transformation.

This setup makes it quite easy to implement navigation in DD4T. I always try to reuse what I have, and the existing navigation.xml is a perfect candidate. We can reuse the Tridion template and the Template Building Block that generates the navigation without any modification.

The generated navigation.xml can even be published to the file system, so other parts of the website can still use it. Alternatively, we can store it in the Content Delivery DB.

The approach for generating navigation in DD4T relies on creating an object model from the navigation XML by deserialization, then caching it for faster performance. Everything is wrapped inside a singleton factory wired through Ninject (or whatever other dependency injection framework you happen to use), and voilà -- you've got navigation.

Object Model

First we need a model to hold the navigation objects. We have the navigation XML, which usually represents a structure of very similar nested nodes.

<root>
  <nav uri="tcm:1-2-4" title="Root" url="/default.aspx">
    <nav uri="tcm:1-3-4" title="Products" url="/products/default.aspx">
      <nav uri="tcm:1-4-64" title="Product 1" url="/products/product-1.aspx"/>
      <nav uri="tcm:1-5-64" title="Product 2" url="/products/product-2.aspx"/>
    </nav>
  </nav>
</root>

We need to create model classes with properties for each attribute in the XML. Luckily, Visual Studio already ships with a tool that does this automatically -- xsd.exe.

C:\Temp>xsd navigation.xml
Microsoft (R) Xml Schemas/DataTypes support utility
[Microsoft (R) .NET Framework, Version 4.0.30319.33440]
Copyright (C) Microsoft Corporation. All rights reserved.
Writing file 'C:\Temp\navigation.xsd'.

C:\Temp>xsd navigation.xsd /classes
Microsoft (R) Xml Schemas/DataTypes support utility
[Microsoft (R) .NET Framework, Version 4.0.30319.33440]
Copyright (C) Microsoft Corporation. All rights reserved.
Writing file 'C:\Temp\navigation.cs'.

Running xsd.exe as shown above generates the model class navigation.cs, which, after some beautification, looks something like this:

using System.Xml.Serialization;

public partial class nav
{
    private nav[] nav1Field;
    private string uriField;
    private string titleField;
    private string urlField;

    [XmlElementAttribute("nav")]
    public nav[] nav1
    {
        get { return this.nav1Field; }
        set { this.nav1Field = value; }
    }

    [XmlAttributeAttribute()]
    public string uri
    {
        get { return this.uriField; }
        set { this.uriField = value; }
    }

    [XmlAttributeAttribute()]
    public string title
    {
        get { return this.titleField; }
        set { this.titleField = value; }
    }

    [XmlAttributeAttribute()]
    public string url
    {
        get { return this.urlField; }
        set { this.urlField = value; }
    }
}

And after some more beautification, the code looks like this (note that I added the property ParentItem and marked it to be ignored during deserialization -- it will hold a reference to the parent navigation item):

using System.Xml.Serialization;

[XmlRoot(Namespace = "", ElementName = "root", IsNullable = false)]
public partial class Navigation
{
    [XmlElement("nav")]
    public NavigationItem[] Items { get; set; }
}

[XmlRoot(Namespace = "", IsNullable = false)]
public partial class NavigationItem
{
    [XmlElement("nav")]
    public NavigationItem[] ChildItems { get; set; }

    [XmlIgnore]
    public NavigationItem ParentItem { get; set; }

    [XmlAttribute("uri")]
    public string Uri { get; set; }

    [XmlAttribute("title")]
    public string Title { get; set; }

    [XmlAttribute("url")]
    public string Url { get; set; }
}

Deserialization

Next, we need to deserialize the XML into our model class. The method below performs the deserialization of an XML navigation file. Similar code could be used to deserialize a string containing navigation XML coming from the database.

private T ParseXml<T>(string filePath) where T : class
{
    try
    {
        using (XmlReader reader = XmlReader.Create(filePath,
            new XmlReaderSettings() { ConformanceLevel = ConformanceLevel.Document }))
        {
            return new XmlSerializer(typeof(T)).Deserialize(reader) as T;
        }
    }
    catch (IOException ioe)
    {
        LOG.Error("Can't read navigation file " + filePath, ioe);
        return null;
    }
}
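The string-based variant mentioned above is not shown in the post; a minimal sketch of such an overload (hypothetical name ParseXmlString, reusing the same serializer settings) could look like this:

```csharp
// Hypothetical companion to ParseXml: deserialize navigation XML held in a
// string (e.g. read from the Content Delivery database) instead of a file.
private T ParseXmlString<T>(string xml) where T : class
{
    if (string.IsNullOrEmpty(xml))
    {
        return null;
    }

    using (StringReader stringReader = new StringReader(xml))
    using (XmlReader reader = XmlReader.Create(stringReader,
        new XmlReaderSettings() { ConformanceLevel = ConformanceLevel.Document }))
    {
        return new XmlSerializer(typeof(T)).Deserialize(reader) as T;
    }
}
```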

The ParseXml method is to be called from the NavigationFactory, but more about that in a follow-up post.

    Navigation navigation = ParseXml<Navigation>(navigationFilePath);

Check out the follow-up post Navigation (part 2) for a showcase of the NavigationFactory and a way of creating a fully-linked navigation object model.


A DD4T.net Implementation - Navigation (part 2)

In the previous post, Navigation, I presented the foundation for deserializing the navigation XML into an object model. In this post, I'll continue with the NavigationFactory class and its integration with caching.

The NavigationFactory class is a singleton, wired through dependency injection (such as Ninject), that uses caching to serve the navigation object model for a given Publication.

[Inject]
public virtual ICacheWrapper CacheWrapper { get; set; }

public Navigation GetNavigation()
{
    string publicationUrl = UriHelper.GetPublicationUrl(HttpContext.Current);
    string key = GetKey(publicationUrl);

    Navigation navigation;
    object cacheElement;
    if (!CacheWrapper.TryGet(key, out cacheElement))
    {
        string virtualPath = publicationUrl + "system/navigation.xml";
        string filePath = HttpContext.Current.Request.MapPath(virtualPath);
        navigation = ParseXml<Navigation>(filePath);
        SetParent(navigation);

        if (navigation == null)
        {
            CacheWrapper.Insert(key, false, 1);
        }
        else
        {
            CacheWrapper.Insert(key, navigation, 60);
        }
    }
    else
    {
        navigation = cacheElement as Navigation;
    }

    return navigation;
}

Notice the following:
  • The returned navigation object model depends on the Publication identified for the current request. The method GetPublicationUrl from the UriHelper class returns the first-level folder of the URL path, which helps us identify the Publication;
  • Method ParseXml was presented in my previous post, mentioned above, and is responsible for deserializing the XML into an object model;
  • A null Navigation object is cached for one minute, to improve performance in case the navigation XML is missing or not deserializable; otherwise, the navigation object is cached for one hour;
  • For the sake of simplicity, this example uses hard-coded values for cache times, and the XML navigation path is "/system/navigation.xml" under the PublicationUrl;
  • Method SetParent creates the reference from each child navigation item to its parent. Initially, the navigation XML only contains references from parent navigation items to their children, but in practice we also need the capability to traverse the navigation model upward;

private void SetParent(Navigation navigation)
{
    if (navigation == null)
    {
        return;
    }

    foreach (NavigationItem item in navigation.Items)
    {
        SetParent(item);
    }
}

private void SetParent(NavigationItem item)
{
    if (item == null || item.ChildItems == null)
    {
        return;
    }

    foreach (NavigationItem child in item.ChildItems)
    {
        child.ParentItem = item;
        SetParent(child);
    }
}
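The GetKey helper used to build the cache key is not shown in the post; a minimal sketch (an assumption, not the author's actual code) could simply namespace the Publication URL:

```csharp
// Hypothetical implementation: prefix the publication URL so navigation
// entries do not collide with other entries in the shared cache.
private string GetKey(string publicationUrl)
{
    return "Navigation_" + publicationUrl;
}
```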

A useful method is GetNavigation(Navigation navigation), which gives us the array of NavigationItem objects situated under the 'root' Structure Group in the navigation model.

public NavigationItem[] GetNavigation(Navigation navigation)
{
    if (navigation == null || navigation.Items.IsNullOrEmpty())
    {
        return new NavigationItem[0];
    }

    NavigationItem[] childItems = navigation.Items[0].ChildItems;

    return childItems == null ? new NavigationItem[0] : childItems;
}

The code above is wired in Ninject using the following construct:

Bind<INavigationFactory>().To<NavigationFactory>().InSingletonScope();

Also, client code makes use of the navigation through the following constructs:

INavigationFactory factory = DependencyResolver.Current.GetService<INavigationFactory>();
Navigation navigation = factory.GetNavigation();
NavigationItem[] items = factory.GetNavigation(navigation);


A DD4T.net Implementation - Navigation (part 3)

Check out the previous posts about the NavigationFactory: Navigation and Navigation (part 2). I presented a way to deserialize the navigation XML into an object model and then serve it through a factory class that uses caching.

In this post, I will present a simple way of locating a certain node in the navigation object model. Locating a NavigationItem is done by comparing it in some way to a given identifier. The most natural way is to retrieve an item by its TcmUri.

In order to do so, I defined the following method on the INavigationFactory interface:

    NavigationItem GetItemById(Navigation navigation, string tcmUri);

The implementation of this method traverses the Navigation object passed as parameter and applies a Comparator to it, hoping to identify the given tcmUri.

The method calls helper method FindItem that performs the recursive traversal of the Navigation object model.

public NavigationItem GetItemById(Navigation navigation, string pageUri)
{
    if (navigation == null)
    {
        return null;
    }

    IComparator comparator = new UriComparator() { Uri = pageUri };
    foreach (NavigationItem item in navigation.Items)
    {
        NavigationItem result = FindItem(item, comparator);
        if (result != null)
        {
            return result;
        }
    }

    return null;
}

The UriComparator class is an implementation of the custom interface IComparator that compares the given Uri property to the TcmUri property of the NavigationItem we are currently testing.

private interface IComparator
{
    bool Compare(NavigationItem item);
}

private class UriComparator : IComparator
{
    public string Uri { get; set; }

    public bool Compare(NavigationItem item)
    {
        return Uri != null && item != null && Uri.Equals(item.TcmUri);
    }
}

Finally, the FindItem method calls the actual comparator's Compare method and performs the depth-first recursive traversal of the NavigationItem passed to it.

private NavigationItem FindItem(NavigationItem item, IComparator comparator)
{
    if (item == null)
    {
        return null;
    }

    if (comparator.Compare(item))
    {
        return item;
    }

    if (item.ChildItems != null)
    {
        foreach (NavigationItem child in item.ChildItems)
        {
            NavigationItem result = FindItem(child, comparator);
            if (result != null)
            {
                return result;
            }
        }
    }

    return null;
}

As an example, the calling code uses the NavigationFactory.GetItemById() in the following way:

INavigationFactory navigationFactory =
DependencyResolver.Current.GetService<INavigationFactory>();
NavigationItem item = navigationFactory.GetItemById(Navigation, "tcm:1-2-64");


A DD4T.net Implementation - Navigation Utils

In the previous posts Navigation and Navigation (part 2), I presented the NavigationFactory class and the object model obtained by deserializing the navigation XML.

In this post, I will present simple examples of creating breadcrumb and navigation structures.

Breadcrumb

When rendering a breadcrumb trail, we start by locating the NavigationItem in the navigation object model corresponding to the current page, and then we follow the "ParentItem" relation to go up until the navigation root. We retain each NavigationItem that we encounter in this traversal. Finally, the breadcrumb trail is the reversed list of these NavigationItems.

The following code is part of the NavigationFactory and returns an array of NavigationItem objects corresponding to the breadcrumb trail.

public NavigationItem[] GetBreadcrumb(NavigationItem page)
{
    IList<NavigationItem> result = new List<NavigationItem>();
    if (page != null)
    {
        do
        {
            result.Add(page);
            page = page.ParentItem;
        }
        while (page != null);
    }

    return result.Reverse().ToArray();
}

The method GetBreadcrumb expects a NavigationItem object as parameter, which means we first need to find the NavigationItem for the current IPage. The method below builds the breadcrumb for a given IPage. The code checks whether the current page itself is in the navigation object model or, failing that, whether its Structure Group is.

private List<NavigationItem> GetBreadcrumbForPage(IPage page)
{
    NavigationItem item = NavigationFactory.GetItemById(Navigation, page.Id) ??
        NavigationFactory.GetItemById(Navigation, page.StructureGroup.Id);

    return GetBreadcrumbForNavigationItem(item);
}

Once the NavigationItem corresponding to the current IPage has been identified, we call method GetBreadcrumbForNavigationItem which builds the actual breadcrumb. Notice that we discard the first element in the returned array, because this corresponds to the Root Structure Group.

Additionally, the code checks whether the last element in the breadcrumb is a default page (e.g. default.aspx), and if so, it removes it.

private List<NavigationItem> GetBreadcrumbForNavigationItem(NavigationItem item)
{
    List<NavigationItem> result = new List<NavigationItem>(NavigationFactory.GetBreadcrumb(item));
    if (result.Count > 0)
    {
        result.RemoveAt(0);

        NavigationItem lastItem = result.LastOrDefault();
        if (lastItem != null && UriHelper.IsDefaultPage(lastItem.ActualUrl))
        {
            result.Remove(lastItem);
        }
    }

    return result;
}
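The UriHelper.IsDefaultPage method referenced above is not shown in the post; a plausible sketch (an assumption, not the author's code) checks whether the last URL segment is a default page name:

```csharp
// Hypothetical sketch: treat a URL as a "default page" when its last segment
// is empty (the path ends in '/') or equals "default.aspx".
public static bool IsDefaultPage(string url)
{
    if (string.IsNullOrEmpty(url))
    {
        return false;
    }

    string lastSegment = url.Substring(url.LastIndexOf('/') + 1);
    return lastSegment.Length == 0 ||
        "default.aspx".Equals(lastSegment, StringComparison.OrdinalIgnoreCase);
}
```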

Navigation

Rendering navigation implies getting the Navigation object model and traversing it top to bottom in order to generate the HTML structures for each level.

The following code exists in a Razor template and calls a partial view passing to it the NavigationItem array of the first level items in the navigation:

    @Html.Partial("_PartialNavigation", Html.GetNavigations((IPage)Model))

The method GetNavigations is defined as an extension method for HtmlHelper and is responsible for reading the navigation items directly under the root Structure Group:

public static IList<NavigationItem> GetNavigations(this HtmlHelper htmlHelper, IPage page)
{
    if (page == null)
    {
        return new NavigationItem[0];
    }
    INavigationFactory navigationFactory = DependencyResolver.Current.GetService<INavigationFactory>();
    Navigation navigation = navigationFactory.GetNavigation();
    return new List<NavigationItem>(navigationFactory.GetNavigation(navigation));
}

The Razor view presented below belongs to file _PartialNavigation.cshtml and it shows some sample code that generates a two-level deep HTML navigation:

@model IList<NavigationItem>

<ul>
    @foreach (NavigationItem levelOne in Model)
    {
        <li>
            <a href="@levelOne.Url">@levelOne.Title</a>
            <ul>
                @foreach (NavigationItem levelTwo in levelOne.ChildItems)
                {
                    <li>
                        <a href="@levelTwo.Url">@levelTwo.Title</a>
                    </li>
                }
            </ul>
        </li>
    }
</ul>


A DD4T.net Implementation - Pagination (Models)

This post adds pagination capabilities to the AJAX Dynamic Component Presentation (DCP) setup presented in the earlier post Rendering Only a Partial for AJAX.

The use case is the following: we have a controller that displays a list of DCPs in a paginated way. Due to the dynamic nature of this list, the list of DCPs should be rendered as a partial using AJAX. This means an actual AJAX request is made to the controller, which recalculates only the list of DCPs and serves it back. Because the paginated AJAX URL produces HTML directly, the Javascript logic can simply replace the container div's innerHTML with the markup received from pagination; a whole page reload is thus not necessary. Additionally, the list of DCPs can be sorted before being paginated, but more about that in a later post.

This post describes the server-side model builder that generates the paginated list of models. In a subsequent post, I will present the server-side views that render it and the client-side Javascript performing the AJAX handling.

In RouteConfig.cs, I modified the partial AJAX route by adding a new URL path parameter, 'page', of type integer:

routes.MapRoute(
    name: "DCP",
    url: "{controller}/{action}/{publication}/{component}/{view}/{partial}/{page}.ajax",
    defaults: new
    {
        partial = UrlParameter.Optional,
        page = UrlParameter.Optional
    },
    constraints: new
    {
        controller = @"\w{1,50}",
        action = @"\w{1,50}",
        publication = @"\d{1,3}",
        component = @"\d{1,6}",
        view = @"\w{1,50}",
        partial = @"_[\w\.]{1,50}",
        page = @"\d{0,3}"
    }
);

An example URL for calling this partial paginated URL would be:

http://my.server.com/Device/Index/123/456/FullDetail/_PartialDocument/2.ajax

The URL above calls controller Device, action method Index, on the Component with id tcm:123-456, and uses view FullDetail to retrieve the DD4T strong model. The controller dispatches to view _PartialDocument.cshtml and displays page 2 of the list of documents that the controller built.
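To illustrate how the publication and component ids in the URL map back to a TcmUri, here is a simplified parser sketch (not the actual DD4T TcmUri class, which should be used in real code):

```csharp
// Simplified illustration of the "tcm:" URI format: tcm:<publication>-<item>[-<type>].
// For example, tcm:123-456 has PublicationId 123 and ItemId 456.
public class SimpleTcmUri
{
    public int PublicationId { get; private set; }
    public int ItemId { get; private set; }

    public SimpleTcmUri(string uri)
    {
        // Strip the "tcm:" scheme and split the numeric parts.
        string[] parts = uri.Substring(4).Split('-');
        PublicationId = int.Parse(parts[0]);
        ItemId = int.Parse(parts[1]);
    }
}
```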

The models that make all this possible, and their builders, are as follows. We start with a very simple BuilderBase class that holds a generic type parameter 'ModelType' representing the actual model we build from. The class is abstract because it only defines the signature of a Build() method to be implemented in its specific subclasses:

public abstract class BuilderBase<ModelType>
{
    public ModelType Model { get; protected set; }

    public BuilderBase(ModelType model)
    {
        Model = model;
    }

    public abstract void Build();
}

Next, we define base class AjaxBase, another generic that provides some common properties for AJAX calls, such as controller name, view, partialView, partialUrl:

public abstract class AjaxBase<ModelType>
    : BuilderBase<ModelType> where ModelType : ModelBase
{
    public string Controller { get; private set; }
    public bool IsPostback { get; protected set; }
    public string View { get; private set; }
    public string PartialView { get; private set; }
    public string PartialUrl
    {
        get
        {
            Uri uri = HttpContext.Current.Request.Url;
            TcmUri modelId = new TcmUri(Model.Id);

            return string.Format("http://{0}{1}/{2}/Index/{3}/{4}/{5}/{6}",
                uri.Host,
                uri.Port == 80 ? string.Empty : string.Format(":{0}", uri.Port),
                Controller,
                modelId.PublicationId,
                modelId.ItemId,
                View,
                PartialView);
        }
    }

    public AjaxBase(ModelType model, string controller,
        string view, string partialView)
        : base(model)
    {
        Controller = controller;
        View = view;
        PartialView = partialView;
    }
}

Next, we define the PaginationBase class. In order not to reinvent the wheel, this class makes use of the PagedList package, which "makes it easier for .Net developers to write paging code" and "allows you to take any IEnumerable(T) and by specifying the page size and desired page index, select only a subset of that list". The class defines properties such as PageNumber and PageSize that are used later on to determine which subset of the paged list to display. The class also helps us construct the paged partial URL that we call from client-side Javascript.

public abstract class PaginationBase<ModelType, PagedType> : AjaxBase<ModelType>
    where ModelType : ModelBase
    where PagedType : ModelBase
{
    public IPagedList<PagedType> PagedItems { get; protected set; }
    public int PageNumber { get; set; }
    public int PageSize { get { return 10; } } // just an example

    public PaginationBase(ModelType model, string controller, string view,
        string partialView, int pageNumber)
        : base(model, controller, view, partialView)
    {
        PageNumber = Math.Max(1, pageNumber);
    }

    public string GetPagerUrl(int pageNumber)
    {
        return string.Format("{0}/{1}.ajax", PartialUrl, pageNumber);
    }

    protected void ToPagedList(IEnumerable<PagedType> items)
    {
        PagedItems = items.ToPagedList(PageNumber, PageSize);
    }
}

Next, the actual implementation of the paginated model is class DocumentPartial. This class defines the Build() method that creates the list of Document objects and then paginates it. Notice the use of the IsPostback property -- the Build method is only invoked during an AJAX postback; otherwise, this partial model does not actually build the paginated list.

public class DocumentPartial : PaginationBase<Device, Document>
{
    public DocumentPartial(Device device, int pageNumber)
        : base(device, "Device", "Device", "_PartialDocument", pageNumber)
    { }

    public override void Build()
    {
        IsPostback = true;
        IEnumerable<Document> documents = null; // fetch the documents here...
        ToPagedList(documents);
    }
}

Finally, the calling code sits inside the controller. In our case, the Device controller's Index action has the following simplified logic. The DocumentPartial class is used to build the list of Documents only if the partial variable is actually specified and has the value '_PartialDocument'; otherwise, the DocumentPartial is created but not built. The controller either dispatches directly to the _PartialDocument partial view with the corresponding paginated list of Documents, or it attaches the empty DocumentPartial to the ViewBag for later processing by the Device view.

public class DeviceController
{
    public ActionResult Index(int? publication, int? component,
        string view, string partial, int? page)
    {
        Device device;
        string viewName;
        ResolveModelAndView<Device>(publication, component, view, out device,
            out viewName);

        if (partial == null || partial == "_PartialDocument")
        {
            var documentPartial = new DocumentPartial(device, page ?? 1);
            if (partial == null)
            {
                ViewBag.DocumentPartial = documentPartial;
            }
            else
            {
                documentPartial.Build();
                return View(partial, documentPartial);
            }
        }

        return View(viewName, device);
    }
}

The mechanism above is quite flexible and can be reused in many situations, by simply providing implementation classes for the abstract PaginationBase and wiring them up in the Component Controller.
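For example, adding a second paginated list would only require another small subclass (all names below are hypothetical, not from the original post):

```csharp
// Hypothetical second implementation: a paginated list of NewsArticle objects
// attached to the same Device model, reusing all of the base plumbing.
public class NewsPartial : PaginationBase<Device, NewsArticle>
{
    public NewsPartial(Device device, int pageNumber)
        : base(device, "Device", "Device", "_PartialNews", pageNumber)
    { }

    public override void Build()
    {
        IsPostback = true;
        IEnumerable<NewsArticle> articles = null; // fetch the news articles here...
        ToPagedList(articles);
    }
}
```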

The pagination, AJAX, and base models are quite versatile and can be used together or individually, depending on whether the requirement is to display simple embedded lists of DCPs, AJAX lists, paginated AJAX lists, or any combination thereof.

In a later post, I will tackle the server-side view logic.


A DD4T.net Implementation - Pagination (Views)

In the previous post, Pagination (Models), I presented a way of modeling the server-side code to display dynamic lists of Dynamic Component Presentations (DCPs) using AJAX.

In this post, I present the server-side view logic in the context of an ASP.NET MVC architecture, using Razor CSHTML views.

As presented in the previous post, the Controller creates a model class that contains the paginated list and then attaches this model to the ViewBag. In the example, the DeviceController created the DocumentPartial model, which contained a paginated list of documents.

We now need to display this list of paginated items in a Razor view. In order to keep the logic reusable and succinct, I implemented the paginated view as a partial view. The Component view corresponding to the controller includes the partial view and passes it the model previously saved in the ViewBag. This keeps the partial view generic and reusable from other contexts.

To continue our earlier example, the view corresponding to DeviceController is Device.cshtml and contains roughly the code below. The idea is to first check whether the ViewBag contains our model and, if so, call the partial view _PartialDocument.cshtml with the DocumentPartial model.

@model Device

<!-- markup goes here -->

@if (ViewBag.DocumentPartial != null)
{
@Html.Partial("_PartialDocument", (DocumentPartial)ViewBag.DocumentPartial)
}

<!-- more markup here -->

The main view logic is contained in the partial view. In our example, _PartialDocument.cshtml processes the PagedItems property and displays the items in that list according to some input parameters. The input parameters can be the current page to display (the page number) or the page size (the number of items per page). Additionally, the partial view displays some kind of page iterator that allows for navigation between pages of items (such as next page, previous page, or jumping to a specific page number). The outline of the Razor partial code is shown below:

@model DocumentPartial
@using PagedList.Mvc;

<div>
    Viewing @Model.PagedItems.Count of @Model.PagedItems.TotalItemCount documents

    @if (Model.IsPostback)
    {
        foreach (Document document in Model.PagedItems)
        {
            // Display logic goes here
        }

        <div>
            Page @(Model.PagedItems.PageCount < Model.PagedItems.PageNumber ?
                0 : Model.PagedItems.PageNumber)
            of @Model.PagedItems.PageCount
        </div>

        @Html.PagedListPager(Model.PagedItems, pageNumber => Model.GetPagerUrl(pageNumber))
    }
</div>

Note the usage of Html.PagedListPager -- this extension method, provided by PagedList.Mvc, generates a page iterator similar to the one below:

<div class="pagination-container">
    <ul class="pagination">
        <li class="active"><a>1</a></li>
        <li><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/2.ajax">2</a></li>
        <li><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/3.ajax">3</a></li>
        <li><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/4.ajax">4</a></li>
        <li><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/5.ajax">5</a></li>
        <li><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/6.ajax">6</a></li>
        <li><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/7.ajax">7</a></li>
        <li><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/8.ajax">8</a></li>
        <li><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/9.ajax">9</a></li>
        <li><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/10.ajax">10</a></li>
        <li class="disabled PagedList-ellipses"><a>&#8230;</a></li>
        <li class="PagedList-skipToNext"><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/2.ajax" rel="next">»</a></li>
        <li class="PagedList-skipToLast"><a href="http://my.server.com/Device/Index/123/456/Device/_PartialDocument/23.ajax">»»</a></li>
    </ul>
</div>


A DD4T.net Implementation - Pagination (AJAX)

This is the third and last post on the topic of implementing a pagination solution for lists of Dynamic Component Presentations retrieved by a query. If you haven't already done so, please read the first two posts, Pagination (Models) and Pagination (Views), to put things in context.

This post deals with client-side Javascript code performing the AJAX call to the server-side code and displaying the returned HTML.

In the current architecture, the server-side components render a snippet of HTML (i.e. the only piece of markup that changes between two page iterations). I use Javascript (JQuery) to replace the content of a container DIV with the HTML returned by the DD4T server-side partials. Since we receive rendered HTML back, the DIV content replacement is quite simple to perform; no other rendering, parsing, or processing is necessary, as would be the case had we received JSON back from the server.

I chose this approach because it fits better with the Tridion and DD4T architecture. I can easily reuse the Component Template and its view (and partial views) to render the same snippet of HTML for the paginated AJAX solution as well as for a normal embedded Component Presentation. I can also reuse the Controller, the models and builders. All these factors provide for a great deal of simplicity in coding and ease of maintenance of the code.

The Javascript uses JQuery just for convenience; it is not actually necessary for implementing a simple AJAX call and a DIV content replacement. The following code example shows the initial mapping of an onclick event on anchor tags nested inside an element with class pagination-container to a custom Javascript function, pagination.

Also the code defines a generic function ajaxCall that performs a call to JQuery's $.ajax method and performs error checks and callback function execution.

$(document).ready(function () {
    myApp.init();
});

var myApp = {
    init: function () {
        $('body').on('click', '.pagination-container a', this.pagination);
    },

    ajaxCall: function (options, callback) {
        return $.ajax({
            url: options.url,
            type: options.type,
            data: options.data || {},
            success: function (data) {
                if (data.error !== undefined) {
                    console.log('error');
                } else {
                    callback(data);
                }
            },
            error: function () {
                console.log('error');
            }
        });
    },

Note that the DIV with class pagination-container is the output of the extension method Html.PagedListPager presented in the previous post. The onclick event is hooked to all anchor tags under this DIV and, when clicked, the function pagination, shown below, is called:

    pagination: function () {
        var el = $(this),
            section = el.parents('.paged-section'),
            href = el.attr("href");

        myApp.ajaxCall({ url: href, type: 'GET' }, function (data) {
            section.html(data);
            section.find(":first-child").unwrap();
        });

        return false;
    }
}

Function pagination first calls our custom ajaxCall function and passes to it the URL of the anchor tag that was clicked. It also passes an anonymous callback function that the ajaxCall function executes in case the initial GET request to the AJAX URL is successful.

The callback function sets the inner HTML of the DIV with class paged-section (i.e. the container DIV of the paginated content returned by the AJAX Component Presentation). Finally, it needs to remove the first nested DIV under the 'section' DIV, so it executes the JQuery unwrap() function on the first child of the section node. This effectively removes the first child DIV under the section node, which is necessary in order to remove the duplicate DIVs introduced by simply replacing the parent DIV's innerHTML.

The code above has obviously been greatly simplified so that it is easy to read and understand as an example.



A DD4T.net Implementation - A Simple Publication Resolver

In previous posts I referred to a utility method GetPublicationUrl. This method simply looks up the current request path and returns the first two levels under the root. In my current implementation it is these two levels that identify the Publication.

For example, for URL path /zh/cn/products/abc.html, method GetPublicationUrl returns "/zh/cn/".

In different implementations, it could be that only the first level of the URL path represents the Publication URL, but that is a matter of choice and URL design.

The code below showcases the simple GetPublicationUrl method, which applies a regular expression pattern to the given (current) URL path. The pattern matches only if the path starts with a slash, followed by two characters, a slash, two more characters, and another slash. In that case, it extracts the two nested folder levels and returns them.

public static class UriHelper
{
    private static readonly Regex PublicationUrlRegex = new Regex("^(/../../).*");

    public static string GetPublicationUrl(string urlPath)
    {
        string result = "/";
        Match match = PublicationUrlRegex.Match(urlPath);
        if (match.Success)
        {
            result = match.Groups[1].Value;
        }

        return result;
    }

    public static string GetPublicationUrl(HttpContextBase httpContext)
    {
        return GetPublicationUrl(httpContext.Request.Path);
    }

    public static string GetPublicationUrl(HttpContext httpContext)
    {
        return GetPublicationUrl(httpContext.Request.Path);
    }
}
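The pattern can also be sanity-checked in isolation. Below is a minimal Java sketch of the same matching logic (the class and method names are illustrative, not part of the implementation):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative Java equivalent of UriHelper.GetPublicationUrl
public class UriHelperSketch {
    private static final Pattern PUBLICATION_URL = Pattern.compile("^(/../../).*");

    public static String getPublicationUrl(String urlPath) {
        Matcher matcher = PUBLICATION_URL.matcher(urlPath);
        // Fall back to the root when the path has no two-level prefix
        return matcher.matches() ? matcher.group(1) : "/";
    }

    public static void main(String[] args) {
        System.out.println(getPublicationUrl("/zh/cn/products/abc.html")); // /zh/cn/
        System.out.println(getPublicationUrl("/index.html"));             // /
    }
}
```

Note that "/index.html" does not match, because the pattern requires a second slash after exactly two characters.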

Why would such a method be helpful? The reason is support for multiple websites. In Tridion, we represent different websites as different Publications, and each Publication is identified by an ID and a URL root. If we know, or are able to read, this Publication URL, then all our pages become relative to it. This makes it very easy to reuse the same application code to serve many different websites (or locales). In my current implementation I used GetPublicationUrl with Navigation, Labels, or simply for loading any kind of resource that is configurable in a central repository, without worrying about which repository or context the resource is defined in.

However, this method only goes halfway toward resolving a Publication URL. In fact, it doesn't actually do any resolving; it simply provides a context for a given URL. A proper Publication Resolver would provide mappings between Publication ID and Publication URL and vice versa. More information about a proper Publication Resolver is available in post Dynamic Publication Resolver.


A DD4T.net Implementation - A Dynamic Publication Resolver

In the previous post Simple Publication Resolver, I presented a simplistic way of putting generic pages into Publication context by identifying the Publication URL by just applying a regular expression to the URL path.

In this post, I present a proper Publication Resolver algorithm, for DD4T.net, that dynamically maps Publication IDs to Publication URLs and vice versa.

The interface of a Publication Resolver defines two methods, GetPublicationUrl and GetPublicationId, which retrieve the mapped Publication URL and ID, respectively.

public interface IPublicationResolver
{
    string GetPublicationUrl(int publicationId);
    int GetPublicationId(string publicationUrl);
}

The implementing class, PublicationResolver, defines an algorithm that dynamically discovers the mapped ID for a given Publication URL and the other way around. It makes use of the Tridion Content Delivery API PublicationMetaFactory to retrieve the PublicationMeta object containing information about the given Publication:

public class PublicationResolver : IPublicationResolver
{
    private readonly IDictionary<string, int> mapUrlToId =
        new Dictionary<string, int>();
    private readonly IDictionary<int, string> mapIdToUrl =
        new Dictionary<int, string>();

    public string GetPublicationUrl(int publicationId)
    {
        if (mapIdToUrl.ContainsKey(publicationId))
        {
            return mapIdToUrl[publicationId];
        }
        else
        {
            PublicationMetaFactory factory = new PublicationMetaFactory();
            PublicationMeta meta = factory.GetMeta(publicationId);
            string url = meta == null ? string.Empty : meta.PublicationUrl;

            mapIdToUrl[publicationId] = url;
            mapUrlToId[url] = publicationId;

            return url;
        }
    }

    public int GetPublicationId(string publicationUrl)
    {
        if (mapUrlToId.ContainsKey(publicationUrl))
        {
            return mapUrlToId[publicationUrl];
        }
        else
        {
            PublicationMetaFactory factory = new PublicationMetaFactory();
            PublicationMeta meta = factory.GetMetaByPublicationUrl(publicationUrl).FirstOrDefault();
            int id = meta == null ? 0 : meta.Id;

            mapIdToUrl[id] = publicationUrl;
            mapUrlToId[publicationUrl] = id;

            return id;
        }
    }
}

The interface and class above can easily be put in Ninject as singleton mapping:

Bind<IPublicationResolver>().To<PublicationResolver>().InSingletonScope();

When using the Publication Resolver, we can easily retrieve the Publication ID for a given URL, or the other way around. The code below defines a property PublicationResolver that is injected into the current class:

[Inject]
public virtual IPublicationResolver PublicationResolver { get; set; }

...

string url = PublicationResolver.GetPublicationUrl(44);
int id = PublicationResolver.GetPublicationId("/zh/cn/");
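The two-map memoization at the heart of the resolver is language-neutral. The sketch below (in Java, with the PublicationMetaFactory lookup stubbed out by a plain Map; all names here are illustrative) shows the same lazy fill of both directions on a cache miss:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: caches both directions of the ID <-> URL mapping,
// querying the (stubbed) backend only on a cache miss.
public class ResolverSketch {
    private final Map<String, Integer> urlToId = new HashMap<>();
    private final Map<Integer, String> idToUrl = new HashMap<>();
    private final Map<Integer, String> backend; // stands in for PublicationMetaFactory

    public ResolverSketch(Map<Integer, String> backend) {
        this.backend = backend;
    }

    public String getPublicationUrl(int publicationId) {
        String cached = idToUrl.get(publicationId);
        if (cached != null) {
            return cached;
        }
        // Cache miss: query the backend once, then remember both directions
        String url = backend.getOrDefault(publicationId, "");
        idToUrl.put(publicationId, url);
        urlToId.put(url, publicationId);
        return url;
    }

    public int getPublicationId(String publicationUrl) {
        Integer cached = urlToId.get(publicationUrl);
        if (cached != null) {
            return cached;
        }
        int id = 0; // 0 plays the role of "not found", as in the C# version
        for (Map.Entry<Integer, String> entry : backend.entrySet()) {
            if (entry.getValue().equals(publicationUrl)) {
                id = entry.getKey();
                break;
            }
        }
        idToUrl.put(id, publicationUrl);
        urlToId.put(publicationUrl, id);
        return id;
    }
}
```

Note that, like the C# class, this sketch does not synchronize access to its maps; if the resolver is bound as a singleton in a multi-threaded web application, that is worth keeping in mind.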


A DD4T.net Implementation - Custom Binary Publisher

The default way to publish binaries in DD4T is implemented in class DD4T.Templates.Base.Utils.BinaryPublisher and uses method RenderedItem.AddBinary(Component). This produces binaries that have their TCM URI as suffix in their filename. In my recent project, we had a requirement that binary file names should be clean (without the TCM URI suffix). Therefore, it was time to modify the way DD4T was publishing binaries.

The method in charge of publishing binaries is called PublishItem and is defined in class BinaryPublisher. I therefore extended BinaryPublisher and overrode method PublishItem.

public class CustomBinaryPublisher : BinaryPublisher
{
    private Template currentTemplate;
    private TcmUri structureGroupUri;

In its simplest form, method PublishItem just takes the item and passes it to AddBinary. In order to meet the requirement, we must specify a file name while publishing. This is the file name part of Component.BinaryContent.Filename.

In case there is a Structure Group specified, we use the AddBinary method that takes this into account.

protected override void PublishItem(Item item, TcmUri itemUri)
{
    string url;
    Stream stream = item.GetAsStream();
    RenderedItem renderedItem = engine.PublishingContext.RenderedItem;
    Component component = engine.GetObject(item.Properties["TCMURI"]) as Component;
    BinaryContent binaryContent = component.BinaryContent;
    string mimeType = binaryContent.MultimediaType.MimeType;
    string fileName = Path.GetFileName(binaryContent.Filename);

    if (structureGroupUri == null)
    {
        url = renderedItem.AddBinary(stream, fileName, string.Empty, component, mimeType).Url;
    }
    else
    {
        StructureGroup structureGroup = engine.GetObject(structureGroupUri) as StructureGroup;
        url = renderedItem.AddBinary(stream, fileName, structureGroup, string.Empty, component, mimeType).Url;
    }

    item.Properties["PublishedPath"] = url;
}

A word of caution: the code above causes publishing to fail in case there is more than one Multimedia Component containing a binary with the same name. This is because publishing the two Multimedia Components would in fact result in the binaries overwriting each other on disk.

The error will show up as "Phase: Deployment Prepare Commit Phase failed. Unable to prepare transaction" in the Publishing Queue. On a closer inspection, the core log on the Deployer side will show error "ProcessingException: Attempting to deploy a binary xxx to a location where a different binary is already stored Existing binary: yyy", which means exactly that -- a binary with id yyy is already published with the same name and in the same location as binary with id xxx currently being deployed.

However, this might not be an issue, if all Multimedia Components in the system have different file names (i.e. different binary file names).
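Whether that assumption holds can be verified up front by scanning the binary file names for duplicates before switching to the custom publisher. A small, hypothetical Java sketch of such a check (the input list stands in for the file names read from the Content Manager):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative check: report any binary file name used more than once.
public class DuplicateNameCheck {
    public static Set<String> findDuplicates(List<String> fileNames) {
        Set<String> seen = new HashSet<>();
        Set<String> duplicates = new HashSet<>();
        for (String name : fileNames) {
            // Set.add returns false when the name was already seen
            if (!seen.add(name)) {
                duplicates.add(name);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("logo.png", "hero.jpg", "logo.png");
        System.out.println(findDuplicates(names)); // [logo.png]
    }
}
```

Any name reported by such a check would collide on the Content Delivery side once published without the TCM URI suffix.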

The custom binary publisher is, however, not called directly in this way. In a follow-up post, I will present a way to call the PublishItem method from a Template Building Block (TBB).



A DD4T.net Implementation - Custom Binary Publisher (part 2)

In previous post Custom Binary Publisher, I presented the main logic needed to publish our Multimedia Components using custom code in DD4T .net. In this post, I present the Template Building Blocks (TBB) that call the custom binary publisher.

If you take a closer look at the code, you will notice it is basically the same code as the existing TBBs PublishBinariesComponent and PublishBinariesPage. I just created a separate PublishBinariesHelper class that uses the CustomBinaryPublisher described earlier. Calling methods PublishMultimediaComponent and PublishBinariesInRichTextField will call the overridden method PublishItem.

public class PublishBinariesHelper
{
    private readonly CustomBinaryPublisher binaryPublisher;

    public PublishBinariesHelper(Package package, Engine engine)
    {
        binaryPublisher = new CustomBinaryPublisher(package, engine);
    }

    public void PublishAllBinaries(Component component)
    {
        if (component.ComponentType == ComponentType.Multimedia)
        {
            component.Multimedia.Url = binaryPublisher.PublishMultimediaComponent(component.Id);
        }

        PublishAllBinaries(component.Fields);
        PublishAllBinaries(component.MetadataFields);
    }

    public void PublishAllBinaries(Page page)
    {
        PublishAllBinaries(page.MetadataFields);
    }

    private void PublishAllBinaries(FieldSet fieldSet)
    {
        foreach (IField field in fieldSet.Values)
        {
            switch (field.FieldType)
            {
                case FieldType.ComponentLink:
                case FieldType.MultiMediaLink:
                    foreach (IComponent component in field.LinkedComponentValues)
                    {
                        PublishAllBinaries(component as Component);
                    }
                    break;

                case FieldType.Embedded:
                    foreach (FieldSet embeddedSet in field.EmbeddedValues)
                    {
                        PublishAllBinaries(embeddedSet);
                    }
                    break;

                case FieldType.Xhtml:
                    for (int i = 0; i < field.Values.Count; i++)
                    {
                        field.Values[i] = binaryPublisher.PublishBinariesInRichTextField(field.Values[i]);
                    }
                    break;
            }
        }
    }
}

Next, we create the actual TBB classes that use the PublishBinariesHelper -- CustomPublishBinariesComponent and CustomPublishBinariesPage, which extend their DD4T counterparts PublishBinariesComponent and PublishBinariesPage.

public class CustomPublishBinariesComponent : PublishBinariesComponent
{
    protected override void TransformComponent(Component component)
    {
        PublishBinariesHelper helper = new PublishBinariesHelper(Package, Engine);
        helper.PublishAllBinaries(component);
    }
}

public class CustomPublishBinariesPage : PublishBinariesPage
{
    protected override void TransformPage(Page page)
    {
        PublishBinariesHelper helper = new PublishBinariesHelper(Package, Engine);
        helper.PublishAllBinaries(page);
    }
}

Finally, use the custom TBB classes in your Component Template and Page Template, instead of the default DD4T PublishBinariesComponent and PublishBinariesPage TBBs.


A DD4T.net Implementation - Extending the LinkFactory

In my current DD4T .net implementation, I encountered the requirement to resolve Rich Text Format fields in a custom way. Namely, I had to check whether an RTF Component link points to a wrapper around the actual Component to link to, and if so, resolve the link to the wrapped Component.

An elegant way of doing this is to extend the default DD4T.ContentModel.Factories.LinkFactory and create a new method ResolveRTFLink(string componentUri):

public interface IMyLinkFactory : ILinkFactory
{
    string ResolveRTFLink(string componentUri);
}

The implementing class would extend DD4T LinkFactory and implement IMyLinkFactory:

public class MyLinkFactory : LinkFactory, IMyLinkFactory
{
    [Inject]
    public virtual IComponentFactory ComponentFactory { get; set; }

    [Inject]
    public virtual IModelFactory ModelFactory { get; set; }

At the same time the class must provide an implementation for ResolveRTFLink method:

public string ResolveRTFLink(string componentUri)
{
    string key = string.Format("Link_{0}", componentUri);
    string link = (string)CacheAgent.Load(key);

    if (link == null)
    {
        link = GetRTFUrl(componentUri);
        if (link == null)
        {
            return ResolveLink(componentUri);
        }
        CacheAgent.Store(key, "Link", link);
    }

    return "UnresolvedLink".Equals(link) ? null : link;
}

private string GetRTFUrl(string componentUri)
{
    ModelBase model = ModelFactory.TryGetModel<ModelBase>(componentUri);
    string resolvedLink = null;
    // custom logic to resolve link to 'model'

    return resolvedLink;
}

The mapping of interface to factory singleton instance is specified using Ninject, in class DD4TNinjectModule.cs:

Bind<IMyLinkFactory>().ToMethod(context => new MyLinkFactory()
{
    LinkProvider = context.Kernel.Get<IMyLinkProvider>()
}).InSingletonScope();

Next, we need to call the method ResolveRTFLink. We do this from a helper class that I shamelessly copied from the DD4T libraries. The only modification is to call my own custom resolve method. As you can see this is an extension method for class string:

public static MvcHtmlString ResolveRichText(this string value)
{
    XmlDocument document = new XmlDocument();
    XmlNamespaceManager namespaceManager = new XmlNamespaceManager(document.NameTable);
    namespaceManager.AddNamespace("xhtml", XHTML_NAMESPACE_URI);
    namespaceManager.AddNamespace("xlink", XLINK_NAMESPACE_URI);
    document.LoadXml(string.Format("<xhtmlroot>{0}</xhtmlroot>", value));

    foreach (XmlNode node in document.SelectNodes("//xhtml:a[@xlink:href[starts-with(string(.),'tcm:')]][@xhtml:href='' or not(@xhtml:href)]", namespaceManager))
    {
        string componentUri = node.Attributes["xlink:href"].Value;
        string url = LinkFactory.ResolveRTFLink(componentUri);

        if (string.IsNullOrEmpty(url))
        {
            foreach (XmlNode childNode in node.ChildNodes)
            {
                node.ParentNode.InsertBefore(childNode.CloneNode(true), node);
            }
            node.ParentNode.RemoveChild(node);
        }
        else
        {
            XmlAttribute href = document.CreateAttribute("xhtml:href");
            href.Value = url;
            node.Attributes.Append(href);
            foreach (XmlAttribute attribute in node.SelectNodes("//@xlink:*", namespaceManager))
            {
                node.Attributes.Remove(attribute);
            }
        }
    }

    foreach (XmlNode node in document.SelectNodes("//*[@xlink:*]", namespaceManager))
    {
        foreach (XmlAttribute attribute in node.SelectNodes("//@xlink:*", namespaceManager))
        {
            node.Attributes.Remove(attribute);
        }
    }

    return new MvcHtmlString(RemoveNamespaceReferences(document.DocumentElement.InnerXml));
}

Finally, all that is left to do is to call the string extension method whenever resolving an RTF field is required. Namely, this is done in one place only -- in the extension methods in class IFieldSetExtenstionMethods.cs, as described in post IFieldSet Extension Methods:

public static IList<string> ResolveRichTexts(this IFieldSet fieldSet, string fieldName)
{
    return StringValues(fieldSet, fieldName).Select(x => x.ResolveRichText().ToString()).ToList();
}

public static string ResolveRichText(this IFieldSet fieldSet, string fieldName)
{
    string value = StringValues(fieldSet, fieldName).FirstOrDefault<string>();
    return value == null ? null : value.ResolveRichText().ToString();
}



A DD4T.net Implementation - IIS URL Rewrite

I have used IIS's URL Rewrite module in several of my .net projects. It is a very neat module that gives a lot of URL rewrite/redirect functionality out-of-the-box. Namely, the module can do:
  • URL rewrite -- rewrite the URL path before request processing starts (similar to a server transfer);
  • URL redirect -- redirects the client browser to a modified URL by sending back redirect HTTP status codes;
  • Rewrite outgoing URLs in the response body;
My requirements have so far involved using URL rewrite together with rewriting outgoing URLs in the response. For example, I recently had the following use case -- my client has legacy .aspx pages under location /devices. There are new DD4T .html pages in the system, but because of some routing restrictions they had to be placed under a temporary location /device (notice the difference from /devices). The .html pages should, however, be exposed to the Internet as if they belonged to folder /devices.

Example: /device/page.html should be accessible to the Internet as /devices/page.html, even though in Tridion, the page is under Structure Group "/device".

Enter IIS URL Rewrite. I set a rewrite rule in web.config that takes a URL starting with /devices/ and rewrites it server side to /device/ followed by the same sub-path and .html page name. The rule uses regular expressions and replacement groups:

<system.webServer>
  <rewrite>
    <rules>
      <rule name="'devices' to 'device'" stopProcessing="true">
        <match url="^devices/(.*\.html)$" />
        <conditions logicalGrouping="MatchAll" trackAllCaptures="false" />
        <action type="Rewrite" url="device/{R:1}" />
      </rule>
...

Using the rule above, a page under /device is now also available under /devices. From an SEO perspective, this is something to avoid: the same page should not be accessible on the same website under more than one URL. The next rule takes care of that by forcing direct access to pages under /device to yield HTTP status 404.

<rulename="'device' yields 404"stopProcessing="true">
<matchurl="^device/.*\.html"/>
<conditionslogicalGrouping="MatchAll"trackAllCaptures="false"/>
<actiontype="CustomResponse"statusCode="404"statusReason="Page Not Found"
statusDescription="Page Not Found"/>
</rule>

There is one more issue left to deal with -- Tridion-resolved Component links to the page under /device. These links will be resolved to the actual URL of the page in Tridion, i.e. /device/page.html. But we want this URL to be exposed as /devices/page.html.

I chose to use the URL Rewrite module to rewrite these links in the response body. The rule below intercepts only anchor href links to the /device/ URL and rewrites them to /devices/. The rewrite occurs using regular expressions and replacement groups, but it is restricted to responses with content mime-type text/html. This is done for performance reasons, so we don't accidentally try to rewrite a binary response, for example.

<outboundRules>
  <rule name="'device' to 'devices'" preCondition="IsHTML"
        enabled="true" stopProcessing="false">
    <match filterByTags="A" pattern="^/device/(.*)$" />
    <action type="Rewrite" value="/devices/{R:1}" />
    <conditions logicalGrouping="MatchAny" />
  </rule>

  <preConditions>
    <preCondition name="IsHTML" logicalGrouping="MatchAny">
      <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
    </preCondition>
  </preConditions>
</outboundRules>

Redirect Directories to index.html

Take the following rule as a bonus example. It redirects the client browser to /index.html in case the requested URL points to a directory. The rule is a bit more complex, so I'll explain the regular expression ^((|devices)[^\.]*)(\/)?$ :

  • ^(|devices) match URL path that starts either with nothing OR 'devices' -- the match on nothing is for home pages, which don't have a URL path at all;
  • [^\.]* followed by zero or many characters that are not a dot (.) -- this indicates our request URL path is a directory (i.e. it doesn't contain an extension);
  • (\/)?$ URL path ends with an optional forward slash (/);

<rulename="Redirect directories to /index.html"stopProcessing="true">
<matchurl="^((|devices)[^\.]*)(\/)?$"/>
<conditionslogicalGrouping="MatchAll"trackAllCaptures="false">
</conditions>
<actiontype="Redirect"url="{R:1}/index.html"/>
</rule>
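Since such patterns are easy to get wrong, it can help to exercise the expression outside IIS. The Java sketch below tests the same pattern (it uses only regex constructs that behave the same way in the .NET and Java flavors); the class name is illustrative:

```java
import java.util.regex.Pattern;

// Illustrative harness for the directory-redirect pattern described above.
public class DirectoryRuleCheck {
    private static final Pattern DIRECTORY =
        Pattern.compile("^((|devices)[^\\.]*)(\\/)?$");

    public static boolean isDirectoryRequest(String urlPath) {
        return DIRECTORY.matcher(urlPath).matches();
    }

    public static void main(String[] args) {
        System.out.println(isDirectoryRequest(""));                  // true  (home page)
        System.out.println(isDirectoryRequest("devices/phones/"));   // true
        System.out.println(isDirectoryRequest("devices/page.html")); // false (has extension)
    }
}
```

Note that the IIS match url receives the URL path without its leading slash, which is why the test inputs above have none.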



Create EHCache Programmatically

I ran into an issue recently when the EHCache caches that I wanted to create and configure using ehcache.xml were already created. In this situation, the EHCache cache manager throws an exception and execution halts.

Alternatively, when it is not possible to use a configuration file (ehcache.xml) to define the caches, one must resort to creating them programmatically. That is the subject of this post.

The code below first checks whether there is a configured cache with the given name, and if not, it proceeds to create one programmatically.


CacheFactory() {
    CacheManager cacheManager = CacheManager.create();

    // maxEntriesLocalHeap, timeToLiveSeconds and timeToIdleSeconds are
    // configuration fields of the enclosing class
    if (!cacheManager.cacheExists("myCache")) {
        cacheManager.addCache(
            new Cache(
                new CacheConfiguration("myCache", maxEntriesLocalHeap)
                    .memoryStoreEvictionPolicy(MemoryStoreEvictionPolicy.LRU)
                    .eternal(false)
                    .timeToLiveSeconds(timeToLiveSeconds)
                    .timeToIdleSeconds(timeToIdleSeconds)
            )
        );
    }
}

The code above can be replaced by the following ehcache.xml in order to create the cache myCache:

<ehcache>
    <cache name="myCache"
           eternal="false"
           maxEntriesLocalHeap="20000"
           timeToLiveSeconds="3600"
           timeToIdleSeconds="3600"
           memoryStoreEvictionPolicy="LRU">
    </cache>
</ehcache>




Maven Release Plugin

This post describes the installation and usage of the Maven Release Plugin. The plugin uses behind the scenes a GIT repository configured in the SCM (Source Code Management) section of the POM (Project Object Model).

In your project's main pom.xml, add the following plugin inside your build / pluginManagement / plugins node.

<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<version>2.5.3</version>
<configuration>
<goals>deploy</goals>
<autoVersionSubmodules>true</autoVersionSubmodules>
</configuration>
<dependencies>
<dependency>
<groupId>org.apache.maven.scm</groupId>
<artifactId>maven-scm-api</artifactId>
<version>1.8.1</version>
</dependency>
<dependency>
<groupId>org.apache.maven.scm</groupId>
<artifactId>maven-scm-provider-gitexe</artifactId>
<version>1.8.1</version>
</dependency>
</dependencies>
</plugin>

Although there is no explicit reference to the Maven SCM plugin, the Release plugin makes use of the SCM API and GIT provider packages.

Further in the pom.xml, we define the GIT repository to use:

<scm>
<developerConnection>scm:git:https://server/your-repository.git</developerConnection>
<tag>HEAD</tag>
</scm>

If distribution of binaries is enabled (e.g. using Artifactory), you can configure this under node distributionManagement in your pom.xml:

<distributionManagement>
<snapshotRepository>
<id>my-artifactory</id>
<url>http://my-server/snapshot</url>
</snapshotRepository>
<repository>
<id>my-artifactory</id>
<url>http://my-server/release</url>
</repository>
</distributionManagement>

Configure the credentials for the repository in your ~/.m2/settings.xml file:

<server>
<id>my-artifactory</id>
<username>my-user</username>
<password>UNENCRYPTED_PASSWORD</password>
</server>

Using the Maven Release Plugin

1. Make sure that:
a) all your work is checked in. Releases cannot be performed if any file is not checked in/committed;
b) you have Java 1.8 and your JAVA_HOME points to it;

2. Go to command line (either PC or Mac… the commands are the same)

3. Go to the project you want to release (to root folder that contains the main pom.xml).
E.g.:
/Projects/rootFolder

4. Run command
mvn release:prepare

There will be tons of output, but this is an interactive process. You can select the release name, new SCM code name, next development version name, etc. AFAIK, all the defaults are fine, so don't mess with them unless there is a good reason (such as wanting to change the release name because something happened and a name is already taken).

5A. If command at #4 executed successfully, run command:
mvn release:perform

This will do the actual release: check in changes, upload a bunch of deliverables to Artifactory, etc.
If this step is successful, you're done. No need for anything else. Check SourceTree (browse GIT) and marvel at your new release.
Note that currently the release is done in GIT under the name of the current user. This could be changed so it is performed by some service account (but that is not the case for now). So the user running this command has to be able to check out/check in code in GIT.

5B. If command at #4 or #5A failed, run command:
mvn release:rollback

This will delete a bunch of temporary files and put the old version back in the POMs. You should run rollback before correcting any errors thrown during the release. Only after running 'rollback' should you proceed to change, fix and check in/commit code.

6. Optional - run command:
mvn release:clean

This deletes a bunch of temporary directories used during the release/deploy phases. Not mandatory.



Deployer Extension - Handle Binary Unpublish

If you have written any SDL Tridion Content Delivery Deployer extensions, you have noticed there is no easy way of intercepting the unpublish/undeploy of a binary. This blog post shows how to intercept such an action and execute your custom code on it.

The reason why it is hard to intercept a binary undeploy is that the binary removal does not actually happen at the Deployer level; rather, it takes place at the storage level. So the extension point to be used is not a Deployer extension, but a storage FileSystem or JPA extension.

File System

The following code implements a storage extension that intercepts the removal of a binary from the File System Content Data Storage (fka the File system broker):

package com.tridion.storage.toolkit;

@Component("FSBinaryDAOExtension")
@Scope("prototype")
public class FSBinaryDAOExtension extends FSBinaryContentDAO implements BinaryContentDAO {

    @Override
    public void remove(int publicationId, int binaryId, String variantId,
            String relativePath) throws StorageException {
        // Your custom code goes here

        super.remove(publicationId, binaryId, variantId, relativePath);

        // or here
    }
}

Notice the package must start with com.tridion.storage. Without it, the class will not be found during the storage module initialization.

Depending on the requirement, you can place your custom code before or after the call to super.remove.

Database Storage (JPA)

The code below implements a storage extension that uses the Database storage for Content Delivery:

package com.tridion.storage.toolkit;

@Component("JPABinaryDAOExtension")
@Scope("prototype")
public class JPABinaryDAOExtension extends JPABinaryContentDAO implements BinaryContentDAO {

    @Override
    public void remove(int publicationId, int binaryId, String variantId,
            String relativePath) throws StorageException {
        // your custom code goes here

        super.remove(publicationId, binaryId, variantId, relativePath);

        // or here
    }
}

Also note the package must start with com.tridion.storage.

Bundle XML Descriptor

We must configure the custom classes in a bundle XML descriptor file. Below is such a file, in my case called toolkit_dao_bundle.xml:

<?xml version="1.0" encoding="UTF-8"?>
<StorageDAOBundles>
    <!-- Filesystem mappings -->
    <StorageDAOBundle type="filesystem">
        <StorageDAO typeMapping="Binary"
                    class="com.tridion.storage.toolkit.FSBinaryDAOExtension" />
    </StorageDAOBundle>

    <!-- Java Persistence API mappings -->
    <StorageDAOBundle type="persistence">
        <StorageDAO typeMapping="Binary"
                    class="com.tridion.storage.toolkit.JPABinaryDAOExtension" />
    </StorageDAOBundle>
</StorageDAOBundles>

Place the bundle configuration XML file either on the class-path of your Deployer, or package it in the root position inside your extension JAR.

Final Configuration

The final configuration must be made in cd_storage_conf.xml of your Deployer. Add the following line inside node Global / Storages / StorageBindings:

<Storages>
    <StorageBindings>
        <Bundle src="toolkit_dao_bundle.xml" />
    </StorageBindings>
...

Restart the Deployer.

With all this configuration and code in place, your custom binary handling code should be called when a binary is unpublished/undeployed. Remember that in Tridion, binaries are only unpublished when they are no longer referenced by any published Component.



.Net Callback from Java Cache Invalidate Message

The problem this post solves is as follows: the Content Delivery API has a mechanism called Cache Decorator, which allows the implementor to define custom code that will be called when things are added/removed/flushed from the internal Tridion Broker Cache. This mechanism allows external actors to interact and synchronize with the Tridion Broker Cache.

Namely, a DD4T cache should listen to Tridion Broker Cache messages in order to invalidate its own entries. In the Java world, this is simply implemented by extending the CacheDecorator Tridion class. However, in a .NET environment, there is no easy way of intercepting Java method execution back in .NET. This is precisely what this post describes.

In a .NET architecture, the Tridion Content Delivery API runs as Java byte code embedded in the .NET runtime inside the IIS worker process. Communication between .NET and Java happens using a standard called JNI (Java Native Interface). Namely, the .NET runtime calls Java code that is mapped class to class and method to method from .NET classes/methods to their Java counterparts.

The solution this post presents enables communication from the Java embedded API back into .NET by means of defining a custom interface that is implemented in both Java and .NET and mapping it as callback method using JNI (namely JuggerNET package used internally by the Content Delivery framework).

There are two assumptions in this approach:
  • there is a Content Delivery stack present in your web application. In other words, the web application does not use the CIL (or REST client providers) or SDL Web 8 for DD4T or DXA. In that case a simple Time-To-Live cache would suffice;
  • the Cache Channel Service is already configured and running. The invalidate messages coming from the Cache Channel Service are received in the Tridion Content Delivery stack by either means of RMI or JMS;
The entire code presented below is available in this blog's GIT repository. You can find there both Java IntelliJ and Visual Studio .NET projects. Link here.

Java Stuff

Let's start with the Java part: a class called DD4TDependencyTracker that extends class com.tridion.cache.DependencyTracker.

package com.tridion.cache;

import com.mihaiconsulting.cache.CacheNotifier;

public class DD4TDependencyTracker extends DependencyTracker {

    public DD4TDependencyTracker(CacheProcessor cache, Region region) {
        super(cache, region);
    }

    @Override
    public boolean processRemove(CacheElement element, boolean force)
            throws CacheException {
        if (element != null) {
            String key = element.getKey().toString();
            CacheNotifier.getInstance().notify(key);
        }

        return super.processRemove(element, force);
    }
}

The custom processRemove method invokes a notify method on a CacheNotifier class. The notifier is simply a singleton that proxies the notify call to a specially crafted CacheInvalidator implementation, which uses JNI (JuggerNET) code to register itself as a proxy for a .NET counterpart class.

package com.mihaiconsulting.cache;

public class CacheNotifier {

    private static final CacheNotifier instance = new CacheNotifier();
    private CacheInvalidator invalidator = new CacheInvalidator() {
        @Override
        public void invalidate(String key) {
            // Using empty CacheInvalidator
        }
    };

    private CacheNotifier() {
    }

    public static CacheNotifier getInstance() {
        return instance;
    }

    public void setInvalidator(CacheInvalidator invalidator) {
        if (invalidator == null) {
            throw new IllegalStateException("Invalid null CacheInvalidator");
        }

        this.invalidator = invalidator;
    }

    public void notify(String key) {
        invalidator.invalidate(key);
    }
}

CacheInvalidator is an interface defining an invalidate method. In Java, we define an implementing class that makes the actual (callback) call into .NET using JNI. We will see later that the same CacheInvalidator interface has a counterpart interface in .NET. All implementations of the .NET CacheInvalidator interface will be invoked when the Java callback fires.
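For reference, the Java CacheInvalidator interface itself is just a single-method contract along these lines (a sketch; in the full solution it would live in package com.mihaiconsulting.cache):

```java
// Shared callback contract: the Java side calls invalidate(key),
// and the JNI-mapped implementation forwards the call into .NET.
public interface CacheInvalidator {
    void invalidate(String key);
}
```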

The following implementing class, CacheInvalidatorCallback, is given for reference purposes only. It contains JuggerNET-specific code and its relevance to this post is minor. The actual code is available in the full solution in Git.

package com.mihaiconsulting.cache;

public class CacheInvalidatorCallback implements CacheInvalidator {

    @Override
    public void invalidate(String key) {
        Value value = new Value();
        Value.callback_opt(this, _xmog_inst, _xmog_cbs[0], value, key);
    }
}

.NET Stuff

In .NET, we define proxy classes for each class and interface from Java. These are specially crafted objects that use JNI (the JuggerNET API) to define counterparts for their Java correspondents.

Interface ICacheInvalidator, declared below, defines the Invalidate method that the JNI callback will invoke on an implementing class:

namespace Com.MihaiConsulting.Cache
{
    public interface ICacheInvalidator
    {
        void Invalidate(string key);
    }
}

Class CacheNotifier maps the Java counterpart class and defines proxy methods for the Instance and Invalidator properties:

using Codemesh.JuggerNET;

namespace Com.MihaiConsulting.Cache
{
    public class CacheNotifier : JuggerNETProxyObject
    {
        private static JavaClass _cmj_theClass = JavaClass.RegisterClass("com.mihaiconsulting.cache.CacheNotifier", typeof(CacheNotifier));
        private static JavaMethod _getInstance = new JavaMethod(_cmj_theClass, typeof(CacheNotifier), "getInstance", "()Lcom/mihaiconsulting/cache/CacheNotifier;", true, false, false);
        private static JavaMethod _setInvalidator = new JavaMethod(_cmj_theClass, typeof(void), "setInvalidator", "(Lcom/mihaiconsulting/cache/CacheInvalidator;)V", false, false, false);

        public CacheNotifier(JNIHandle objectHandle) : base(objectHandle) { }

        public static CacheNotifier Instance
        {
            get
            {
                return (CacheNotifier)_getInstance.CallObject(null, typeof(CacheNotifier), false);
            }
        }

        public ICacheInvalidator Invalidator
        {
            set
            {
                jvalue[] cmj_jargs = new jvalue[1];
                using (JavaMethodArguments cmj_jmargs = new JavaMethodArguments(cmj_jargs).Add(value, typeof(ICacheInvalidator)))
                {
                    _setInvalidator.CallVoid(this, cmj_jmargs);
                }
            }
        }
    }
}

Class CacheInvalidatorCallback is the one defining the mapping between .NET and Java for the callback instance. This mapping is responsible for calling the Invalidate method on the .NET class provided as parameter in the constructor:

using Codemesh.JuggerNET;
using System;
using System.IO;
using System.Reflection;

namespace Com.MihaiConsulting.Cache
{
    public class CacheInvalidatorCallback : JuggerNETProxyObject
    {
        private static JavaClass _cmj_theClass = new JavaClass("com/mihaiconsulting/cache/CacheInvalidatorCallback", typeof(CacheInvalidatorCallback));
        private static JavaMethod _constructor;
        private GenericCallback _callback;
        private ICacheInvalidator _invalidator;

        static CacheInvalidatorCallback()
        {
            using (Stream resourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream("Com.MihaiConsulting.Cache.JuggerNET.CacheInvalidatorCallback.class"))
            {
                Byte[] buffer = new Byte[resourceStream.Length];
                resourceStream.Read(buffer, 0, (int)resourceStream.Length);

                _cmj_theClass.ByteCode = buffer;
            }

            _constructor = new JavaMethod(_cmj_theClass, null, "<init>", "(J[J)V", false);
        }

        public CacheInvalidatorCallback(ICacheInvalidator cacheInvalidator)
        {
            _invalidator = cacheInvalidator;
            _callback = new GenericCallback((out int return_type, out jvalue return_value, IntPtr input) =>
            {
                try
                {
                    string key = (string)JavaClass.GetTypedInstance(typeof(string), jvalue.From(input).l);
                    _invalidator.Invalidate(key);

                    return_value = new jvalue();
                    return_type = 0;
                }
                catch (Exception exception)
                {
                    return_value = jvalue.CreateCBRetVal(exception);
                    return_type = 1;
                }

                return 0;
            });

            base.JObject = _constructor.construct(0, _callback);
        }
    }
}

Class CacheInvalidator brings all mappings together -- ICacheInvalidator, CacheInvalidator, CacheInvalidatorCallback and the Java counterpart for CacheInvalidator:

using Codemesh.JuggerNET;

namespace Com.MihaiConsulting.Cache
{
    public class CacheInvalidator : JuggerNETProxyObject
    {
        private static JavaClass _cmj_theClass = JavaClass.RegisterClass(
            "com.mihaiconsulting.cache.CacheInvalidator",
            typeof(ICacheInvalidator),
            typeof(CacheInvalidator),
            typeof(CacheInvalidatorCallback));

        public CacheInvalidator(JNIHandle objectHandle) : base(objectHandle) { }
    }
}




Usage and Configuration of the .NET Cache Invalidation Callback

In the previous post, I described a way to intercept, in .NET, cache invalidation events generated in Java. This post presents the way to configure and use it, and how to write your own code that handles the Java-generated invalidation events.

The entire code presented here is available on this blog's GIT repository. Link here.

Java Stuff

Modify your cd_storage_conf.xml (located in your .NET website's /bin/conf folder), and reference class com.tridion.cache.DD4TDependencyTracker as the Feature inside node ObjectCache:

<Configuration Version="7.1"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="schemas/cd_storage_conf.xsd">
    <Global>
        <ObjectCache Enabled="true">
            <Policy Type="LRU" Class="com.tridion.cache.LRUPolicy">
                <Param Name="MemSize" Value="512mb"/>
            </Policy>

            <Features>
                <!--
                <Feature Type="DependencyTracker" Class="com.tridion.cache.DependencyTracker"/>
                -->
                <Feature Type="DependencyTracker" Class="com.tridion.cache.DD4TDependencyTracker"/>
            </Features>
            ...

Copy file cache-notifier.jar to your website's /bin/lib folder.

.NET Stuff

Copy the DLL Com.MihaiConsulting.CacheNotifier.dll into your .NET project and reference it.

Create a class that implements Com.MihaiConsulting.Cache.ICacheInvalidator interface:

using Com.MihaiConsulting.Cache;

namespace MyCacheInvalidator.Example
{
    public class MyCacheInvalidator : ICacheInvalidator
    {
        public MyCacheInvalidator()
        {
            CacheNotifier.Instance.Invalidator = this;
        }

        public void Invalidate(string key)
        {
            // perform your own cache removal here
            // cache.Remove(key);
        }
    }
}

This class must be set as the 'Invalidator' to call when a message arrives from the CacheNotifier on the Java side. One way to achieve this is to use a Dependency Injection framework and register the instance as the Invalidator of the .NET proxy Com.MihaiConsulting.Cache.CacheNotifier. In the absence of a DI framework, the simplest solution is to set 'this' as the Invalidator when constructing the instance:

            CacheNotifier.Instance.Invalidator = this;

At this point, the Invalidate method of class MyCacheInvalidator will be called whenever the processRemove() method of DD4TDependencyTracker is triggered in Java.
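For completeness, the wiring could be done once at application start, for instance in Global.asax.cs (a sketch: the MyCacheInvalidator class is the example above, while the Global class name and the standard ASP.NET Application_Start hook are assumptions about your web application):

```csharp
using System.Web;
using MyCacheInvalidator.Example;

public class Global : HttpApplication
{
    // Keep a static reference so the invalidator (and its JNI callback) stays alive
    private static MyCacheInvalidator _invalidator;

    protected void Application_Start()
    {
        // The MyCacheInvalidator constructor registers the instance with CacheNotifier
        _invalidator = new MyCacheInvalidator();
    }
}
```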



Implementing Experience Manager in DD4T 2 .NET

This post describes an implementation of Experience Manager (XPM) for DD4T 2 .NET using the CIL (REST) providers in SDL Web 8. The implementation would be basically the same in a more standard architecture where the Tridion Content Delivery stack would be used.

XPM Configuration Switch

In your web application's web.config, add the following line inside the configuration / appSettings node:

<add key="DD4T.IsPreview" value="true"/>

The XPM markup will be generated in your HTML sources only if this switch is present and set to true.

Models Ready for XPM

In order for XPM utility methods provided by DD4T out of the box to work, the models must:
  • implement interface DD4T.Core.Contracts.ViewModels.IViewModel;
  • annotate their Tridion fields with attributes;
The code below shows a simple model that is XPM ready:

using DD4T.ContentModel;
using DD4T.Core.Contracts.ViewModels;
using DD4T.Mvc.ViewModels.Attributes;
using DD4T.ViewModels.Attributes;
using System.Web.Mvc;

public class Article : IViewModel
{
    [TextField(FieldName = "title")]
    public string Title { get; set; }

    [RichTextField(FieldName = "summary")]
    public MvcHtmlString Summary { get; set; }

    public IModel ModelData { get; set; }
}

Notice the attributes TextField and RichTextField mapping the Tridion XML names to the model properties.

Also, the model declares property ModelData of type IModel, required by interface IViewModel. It must be assigned either a DD4T.ContentModel.IComponentPresentation or a DD4T.ContentModel.IPage object.

XPM Code for the Views

The following code enables the XPM JavaScript output in your Razor views. The Page Views and Component Views must be edited to enable XPM markup.

Page View

The following code snippet must go in a Page View at the end of the HTML document, just above the closing </body> tag.

@using DD4T.Mvc.ViewModels.XPM

@if (XpmExtensions.XpmMarkupService.IsSiteEditEnabled())
{
    @Html.Raw(
        XpmExtensions.XpmMarkupService.RenderXpmMarkupForPage(
            Model, "http://tridion-cme.com"
        )
    )
}

You can notice the if statement surrounding the XpmExtensions output. This is required at the moment of writing this post because the method RenderXpmMarkupForPage does not itself check whether XPM is enabled. (This should change in a subsequent DD4T release.)

The code above should generate the following HTML output (or similar):

<!-- Page Settings: {"PageID":"tcm:9-259-64","PageModified":"2016-08-04T14:43:31","PageTemplateID":"tcm:9-264-128","PageTemplateModified":"2016-10-18T12:37:03"} -->
<script type="text/javascript" language="javascript" defer="defer" src="http://tridion-cme.com/WebUI/Editors/SiteEdit/Views/Bootstrap/Bootstrap.aspx?mode=js" id="tridion.siteedit"></script>

Component View

The following code goes in the Component Views. In order for the XPM logic to work properly when moving/delimiting Component Presentations, make sure to surround your CP output by an HTML tag (usually div).

@using DD4T.Mvc.ViewModels.XPM;

<div>
    @if (XpmExtensions.XpmMarkupService.IsSiteEditEnabled())
    {
        @Model.StartXpmEditingZone()
    }
    ...
</div>

The StartXpmEditingZone method is defined in class DD4T.Mvc.ViewModels.XPM.XpmExtensions and is an extension method for objects implementing interface DD4T.Core.Contracts.ViewModels.IViewModel. This implies your model must implement interface IViewModel.

You can notice the if statement surrounding the StartXpmEditingZone output. This is required at the moment of writing this post because the method does not itself check whether XPM is enabled. (This should change in a subsequent DD4T release.)

The code above will generate the following HTML (or similar):

<!-- Start Component Presentation: {"ComponentID" : "tcm:9-266",
"ComponentModified" : "2016-10-18T13:28:10", "ComponentTemplateID" : "tcm:9-264-32",
"ComponentTemplateModified" : "2016-07-26T10:26:25", "IsRepositoryPublished" : false} -->

Text Field

The following code outputs special HTML markup that enables XPM in-line editing for a text field:

@using DD4T.Mvc.ViewModels.XPM;
@model Article

<h1>
@Model.XpmEditableField(m => m.Title)
</h1>

Extension method XpmEditableField outputs the XPM HTML markup as well as the value of the model field.

There is another extension method, XpmMarkupFor, that only outputs the XPM HTML markup. This is perhaps better suited for outputting XHTML rich-text fields, where we can do more processing on the rich-text value before outputting it.
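For example, the Summary rich-text field of the Article model above could be rendered with XpmMarkupFor followed by the (possibly processed) field value (a sketch):

```csharp
@using DD4T.Mvc.ViewModels.XPM;
@model Article

<div>
    @Model.XpmMarkupFor(m => m.Summary)
    @Model.Summary
</div>
```

Because Summary is declared as MvcHtmlString, rendering it directly outputs the raw HTML without additional encoding.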

The code above will generate the following HTML:

<h1>
<!-- Start Component Field: {"XPath":"tcm:Content/custom:Content/custom:title"} -->About Us
</h1>

Embedded Field

Outputting XPM markup for in-line editable embedded fields, single or multi-valued, is a bit trickier. The code looks similar to the following snippet:

@using DD4T.Mvc.ViewModels.XPM;
@model Article

@foreach (EmbeddedParagraph paragraph in Model.Paragraphs)
{
    <span>
        @paragraph.XpmMarkupFor(m => m.Text)
        @Html.Raw(paragraph.Text)
    </span>
}

Notice the construct on the embedded paragraph of type EmbeddedParagraph: the XPM extension method XpmMarkupFor is called on the paragraph model itself.

The code above generates the following HTML output:

<span>
<!-- Start Component Field: {"XPath":"tcm:Content/custom:Content/custom:paragraph[1]/custom:text"} -->
Text <b>goes</b> here
</span>
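The EmbeddedParagraph model used above is not shown in the post; it could look along these lines (a sketch -- the EmbeddedSchemaField attribute name and the "paragraph"/"text" field names are assumptions based on the XPath in the generated markup):

```csharp
using DD4T.ContentModel;
using DD4T.Core.Contracts.ViewModels;
using DD4T.ViewModels.Attributes;
using System.Collections.Generic;

public class EmbeddedParagraph : IViewModel
{
    [TextField(FieldName = "text")]
    public string Text { get; set; }

    public IModel ModelData { get; set; }
}

// On the Article model, the multi-valued embedded field would be declared as:
// [EmbeddedSchemaField(FieldName = "paragraph")]
// public List<EmbeddedParagraph> Paragraphs { get; set; }
```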

Conclusion

At this moment, if you followed the steps so far, you will have a working XPM implementation without needing to republish anything. One thing that still doesn't work is Session Preview.

In other words, here is what works at this moment:
  • Enable/disable XPM functionality on a Page in your website;
  • Move/Reorder Component Presentations on the Page;
  • Add existing Component Presentation on the Page;
  • Create new content and add it to the Page;
In short, all editorial functionality of XPM will be there.

What will not work at the moment is Fast Track Publishing, also known as Session Preview or Update Preview. But more about that in the next post.


